00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 4015
00:00:00.000 originally caused by:
00:00:00.000 Started by upstream project "nightly-trigger" build number 3610
00:00:00.000 originally caused by:
00:00:00.000 Started by timer
00:00:00.061 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.061 The recommended git tool is: git
00:00:00.062 using credential 00000000-0000-0000-0000-000000000002
00:00:00.063 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.086 Fetching changes from the remote Git repository
00:00:00.087 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.126 Using shallow fetch with depth 1
00:00:00.126 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.126 > git --version # timeout=10
00:00:00.166 > git --version # 'git version 2.39.2'
00:00:00.166 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.193 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.193 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.457 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.468 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.479 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:05.479 > git config core.sparsecheckout # timeout=10
00:00:05.489 > git read-tree -mu HEAD # timeout=10
00:00:05.503 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:05.518 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:05.518 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
00:00:05.602 [Pipeline] Start of Pipeline
00:00:05.613 [Pipeline] library
00:00:05.614 Loading library shm_lib@master
00:00:05.614 Library shm_lib@master is cached. Copying from home.
00:00:05.626 [Pipeline] node
00:00:05.640 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:05.641 [Pipeline] {
00:00:05.648 [Pipeline] catchError
00:00:05.649 [Pipeline] {
00:00:05.658 [Pipeline] wrap
00:00:05.663 [Pipeline] {
00:00:05.669 [Pipeline] stage
00:00:05.670 [Pipeline] { (Prologue)
00:00:05.681 [Pipeline] echo
00:00:05.682 Node: VM-host-SM38
00:00:05.686 [Pipeline] cleanWs
00:00:05.695 [WS-CLEANUP] Deleting project workspace...
00:00:05.695 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.702 [WS-CLEANUP] done
00:00:05.874 [Pipeline] setCustomBuildProperty
00:00:05.958 [Pipeline] httpRequest
00:00:06.846 [Pipeline] echo
00:00:06.847 Sorcerer 10.211.164.101 is alive
00:00:06.855 [Pipeline] retry
00:00:06.857 [Pipeline] {
00:00:06.868 [Pipeline] httpRequest
00:00:06.873 HttpMethod: GET
00:00:06.874 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:06.875 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:06.889 Response Code: HTTP/1.1 200 OK
00:00:06.890 Success: Status code 200 is in the accepted range: 200,404
00:00:06.890 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:07.943 [Pipeline] }
00:00:07.958 [Pipeline] // retry
00:00:07.963 [Pipeline] sh
00:00:08.249 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:08.264 [Pipeline] httpRequest
00:00:08.949 [Pipeline] echo
00:00:08.950 Sorcerer 10.211.164.101 is alive
00:00:08.961 [Pipeline] retry
00:00:08.963 [Pipeline] {
00:00:08.978 [Pipeline] httpRequest
00:00:08.983 HttpMethod: GET
00:00:08.984 URL: http://10.211.164.101/packages/spdk_f220d590c6819ff8422b3dca9f8a36dc26cf9429.tar.gz
00:00:08.984 Sending request to url: http://10.211.164.101/packages/spdk_f220d590c6819ff8422b3dca9f8a36dc26cf9429.tar.gz
00:00:09.004 Response Code: HTTP/1.1 200 OK
00:00:09.004 Success: Status code 200 is in the accepted range: 200,404
00:00:09.005 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_f220d590c6819ff8422b3dca9f8a36dc26cf9429.tar.gz
00:01:51.777 [Pipeline] }
00:01:51.794 [Pipeline] // retry
00:01:51.801 [Pipeline] sh
00:01:52.086 + tar --no-same-owner -xf spdk_f220d590c6819ff8422b3dca9f8a36dc26cf9429.tar.gz
00:01:54.639 [Pipeline] sh
00:01:54.924 + git -C spdk log --oneline -n5
00:01:54.924 f220d590c nvmf: rename passthrough_nsid -> passthru_nsid
00:01:54.924 1a1586409 nvmf: use bdev's nsid for admin command passthru
00:01:54.924 892c29f49 nvmf: pass nsid to nvmf_ctrlr_identify_ns()
00:01:54.924 fb6c49f2f bdev: add spdk_bdev_get_nvme_nsid()
00:01:54.924 427304da7 lib/reduce: Reset req->reduce_errno
00:01:54.944 [Pipeline] withCredentials
00:01:54.956 > git --version # timeout=10
00:01:54.969 > git --version # 'git version 2.39.2'
00:01:54.985 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:54.987 [Pipeline] {
00:01:54.996 [Pipeline] retry
00:01:54.998 [Pipeline] {
00:01:55.013 [Pipeline] sh
00:01:55.297 + git ls-remote http://dpdk.org/git/dpdk main
00:01:55.309 [Pipeline] }
00:01:55.327 [Pipeline] // retry
00:01:55.332 [Pipeline] }
00:01:55.348 [Pipeline] // withCredentials
00:01:55.358 [Pipeline] httpRequest
00:01:55.868 [Pipeline] echo
00:01:55.870 Sorcerer 10.211.164.101 is alive
00:01:55.880 [Pipeline] retry
00:01:55.882 [Pipeline] {
00:01:55.897 [Pipeline] httpRequest
00:01:55.903 HttpMethod: GET
00:01:55.903 URL: http://10.211.164.101/packages/dpdk_64f27886b8bf127cd365a8a3ed5c05852a5ae81d.tar.gz
00:01:55.904 Sending request to url: http://10.211.164.101/packages/dpdk_64f27886b8bf127cd365a8a3ed5c05852a5ae81d.tar.gz
00:01:55.912 Response Code: HTTP/1.1 200 OK
00:01:55.913 Success: Status code 200 is in the accepted range: 200,404
00:01:55.914 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_64f27886b8bf127cd365a8a3ed5c05852a5ae81d.tar.gz
00:02:08.936 [Pipeline] }
00:02:08.952 [Pipeline] // retry
00:02:08.960 [Pipeline] sh
00:02:09.242 + tar --no-same-owner -xf dpdk_64f27886b8bf127cd365a8a3ed5c05852a5ae81d.tar.gz
00:02:10.640 [Pipeline] sh
00:02:10.925 + git -C dpdk log --oneline -n5
00:02:10.926 64f27886b8 eal/unix: optimize thread creation
00:02:10.926 c79900e31e app/dumpcap: check return value from adding interface
00:02:10.926 171360df9f net/mlx5: show incomplete records in Tx trace script
00:02:10.926 02932480ae net/mlx5: fix Tx tracing to use single clock source
00:02:10.926 27918f0d53 net/mlx5: fix real time counter reading from PCI BAR
00:02:10.945 [Pipeline] writeFile
00:02:10.960 [Pipeline] sh
00:02:11.244 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:02:11.258 [Pipeline] sh
00:02:11.542 + cat autorun-spdk.conf
00:02:11.542 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:11.542 SPDK_TEST_NVME=1
00:02:11.542 SPDK_TEST_FTL=1
00:02:11.542 SPDK_TEST_ISAL=1
00:02:11.542 SPDK_RUN_ASAN=1
00:02:11.542 SPDK_RUN_UBSAN=1
00:02:11.542 SPDK_TEST_XNVME=1
00:02:11.542 SPDK_TEST_NVME_FDP=1
00:02:11.542 SPDK_TEST_NATIVE_DPDK=main
00:02:11.542 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:11.542 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:11.551 RUN_NIGHTLY=1
00:02:11.553 [Pipeline] }
00:02:11.566 [Pipeline] // stage
00:02:11.580 [Pipeline] stage
00:02:11.583 [Pipeline] { (Run VM)
00:02:11.597 [Pipeline] sh
00:02:11.880 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:02:11.880 + echo 'Start stage prepare_nvme.sh'
00:02:11.880 Start stage prepare_nvme.sh
00:02:11.880 + [[ -n 6 ]]
00:02:11.880 + disk_prefix=ex6
00:02:11.880 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:02:11.880 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:02:11.880 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:02:11.880 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:11.880 ++ SPDK_TEST_NVME=1
00:02:11.880 ++ SPDK_TEST_FTL=1
00:02:11.880 ++ SPDK_TEST_ISAL=1
00:02:11.880 ++ SPDK_RUN_ASAN=1
00:02:11.880 ++ SPDK_RUN_UBSAN=1
00:02:11.880 ++ SPDK_TEST_XNVME=1
00:02:11.880 ++ SPDK_TEST_NVME_FDP=1
00:02:11.880 ++ SPDK_TEST_NATIVE_DPDK=main
00:02:11.880 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:11.880 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:11.880 ++ RUN_NIGHTLY=1
00:02:11.880 + cd /var/jenkins/workspace/nvme-vg-autotest
00:02:11.880 + nvme_files=()
00:02:11.880 + declare -A nvme_files
00:02:11.880 + backend_dir=/var/lib/libvirt/images/backends
00:02:11.880 + nvme_files['nvme.img']=5G
00:02:11.880 + nvme_files['nvme-cmb.img']=5G
00:02:11.880 + nvme_files['nvme-multi0.img']=4G
00:02:11.880 + nvme_files['nvme-multi1.img']=4G
00:02:11.880 + nvme_files['nvme-multi2.img']=4G
00:02:11.880 + nvme_files['nvme-openstack.img']=8G
00:02:11.880 + nvme_files['nvme-zns.img']=5G
00:02:11.880 + (( SPDK_TEST_NVME_PMR == 1 ))
00:02:11.880 + (( SPDK_TEST_FTL == 1 ))
00:02:11.880 + nvme_files["nvme-ftl.img"]=6G
00:02:11.880 + (( SPDK_TEST_NVME_FDP == 1 ))
00:02:11.880 + nvme_files["nvme-fdp.img"]=1G
00:02:11.880 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:02:11.880 + for nvme in "${!nvme_files[@]}"
00:02:11.880 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G
00:02:11.880 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:02:11.880 + for nvme in "${!nvme_files[@]}"
00:02:11.880 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-ftl.img -s 6G
00:02:11.880 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:02:12.141 + for nvme in "${!nvme_files[@]}"
00:02:12.141 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G
00:02:12.141 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:02:12.141 + for nvme in "${!nvme_files[@]}"
00:02:12.141 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G
00:02:12.141 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:02:12.141 + for nvme in "${!nvme_files[@]}"
00:02:12.141 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G
00:02:12.712 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:02:12.712 + for nvme in "${!nvme_files[@]}"
00:02:12.712 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G
00:02:12.712 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:02:12.712 + for nvme in "${!nvme_files[@]}"
00:02:12.712 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G
00:02:12.712 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:02:12.712 + for nvme in "${!nvme_files[@]}"
00:02:12.712 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-fdp.img -s 1G
00:02:12.972 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:02:12.972 + for nvme in "${!nvme_files[@]}"
00:02:12.972 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G
00:02:13.543 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:02:13.543 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu
00:02:13.543 + echo 'End stage prepare_nvme.sh'
00:02:13.543 End stage prepare_nvme.sh
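The prepare_nvme.sh trace above is driven by a bash associative array mapping image name to size; each iteration hands one entry to spdk/scripts/vagrant/create_nvme_img.sh. A minimal standalone sketch of the same pattern, using only the paths and flags visible in the trace (trimmed to three images for brevity):

    declare -A nvme_files=( [nvme.img]=5G [nvme-ftl.img]=6G [nvme-fdp.img]=1G )
    backend_dir=/var/lib/libvirt/images/backends
    for nvme in "${!nvme_files[@]}"; do
        # -n names the raw backing file, -s its size, as in the trace above
        sudo -E spdk/scripts/vagrant/create_nvme_img.sh \
            -n "$backend_dir/ex6-$nvme" -s "${nvme_files[$nvme]}"
    done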
00:02:13.556 [Pipeline] sh
00:02:13.838 + DISTRO=fedora39
00:02:13.838 + CPUS=10
00:02:13.838 + RAM=12288
00:02:13.838 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:02:13.838 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex6-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:02:13.838
00:02:13.838 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:02:13.838 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:02:13.838 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:02:13.838 HELP=0
00:02:13.838 DRY_RUN=0
00:02:13.838 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,
00:02:13.838 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:02:13.838 NVME_AUTO_CREATE=0
00:02:13.838 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,,
00:02:13.838 NVME_CMB=,,,,
00:02:13.838 NVME_PMR=,,,,
00:02:13.838 NVME_ZNS=,,,,
00:02:13.838 NVME_MS=true,,,,
00:02:13.838 NVME_FDP=,,,on,
00:02:13.838 SPDK_VAGRANT_DISTRO=fedora39
00:02:13.838 SPDK_VAGRANT_VMCPU=10
00:02:13.838 SPDK_VAGRANT_VMRAM=12288
00:02:13.838 SPDK_VAGRANT_PROVIDER=libvirt
00:02:13.838 SPDK_VAGRANT_HTTP_PROXY=
00:02:13.838 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:02:13.838 SPDK_OPENSTACK_NETWORK=0
00:02:13.838 VAGRANT_PACKAGE_BOX=0
00:02:13.838 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:02:13.838 FORCE_DISTRO=true
00:02:13.838 VAGRANT_BOX_VERSION=
00:02:13.838 EXTRA_VAGRANTFILES=
00:02:13.838 NIC_MODEL=e1000
00:02:13.838
00:02:13.838 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:02:13.838 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
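In the variable dump above, each comma-separated value appears to be positional: field N applies to the Nth disk of NVME_FILE. Read that way, NVME_MS=true,,,, enables metadata (the ms=64 namespace option in the QEMU args below) on disk 0, the FTL image, and NVME_FDP=,,,on, turns FDP on for disk 3, the FDP image. A sketch of splitting such a list per disk, assuming that positional reading:

    NVME_MS=true,,,,
    NVME_FDP=,,,on,
    IFS=',' read -ra ms  <<< "$NVME_MS"
    IFS=',' read -ra fdp <<< "$NVME_FDP"
    # Field i configures disk i of NVME_FILE (0=ftl, 1=nvme, 2=multi0, 3=fdp):
    echo "disk 0: ms=${ms[0]:-none} fdp=${fdp[0]:-off}"   # -> ms=true fdp=off
    echo "disk 3: ms=${ms[3]:-none} fdp=${fdp[3]:-off}"   # -> ms=none fdp=on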
00:02:16.421 Bringing machine 'default' up with 'libvirt' provider...
00:02:16.682 ==> default: Creating image (snapshot of base box volume).
00:02:16.943 ==> default: Creating domain with the following settings...
00:02:16.943 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730828196_9bd21b97bbe5bfd48324
00:02:16.943 ==> default: -- Domain type: kvm
00:02:16.943 ==> default: -- Cpus: 10
00:02:16.943 ==> default: -- Feature: acpi
00:02:16.943 ==> default: -- Feature: apic
00:02:16.943 ==> default: -- Feature: pae
00:02:16.943 ==> default: -- Memory: 12288M
00:02:16.943 ==> default: -- Memory Backing: hugepages:
00:02:16.943 ==> default: -- Management MAC:
00:02:16.943 ==> default: -- Loader:
00:02:16.943 ==> default: -- Nvram:
00:02:16.943 ==> default: -- Base box: spdk/fedora39
00:02:16.943 ==> default: -- Storage pool: default
00:02:16.943 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730828196_9bd21b97bbe5bfd48324.img (20G)
00:02:16.943 ==> default: -- Volume Cache: default
00:02:16.943 ==> default: -- Kernel:
00:02:16.943 ==> default: -- Initrd:
00:02:16.943 ==> default: -- Graphics Type: vnc
00:02:16.943 ==> default: -- Graphics Port: -1
00:02:16.943 ==> default: -- Graphics IP: 127.0.0.1
00:02:16.943 ==> default: -- Graphics Password: Not defined
00:02:16.943 ==> default: -- Video Type: cirrus
00:02:16.943 ==> default: -- Video VRAM: 9216
00:02:16.943 ==> default: -- Sound Type:
00:02:16.943 ==> default: -- Keymap: en-us
00:02:16.943 ==> default: -- TPM Path:
00:02:16.943 ==> default: -- INPUT: type=mouse, bus=ps2
00:02:16.943 ==> default: -- Command line args:
00:02:16.943 ==> default: -> value=-device,
00:02:16.943 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:02:16.943 ==> default: -> value=-drive,
00:02:16.943 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:02:16.943 ==> default: -> value=-device,
00:02:16.943 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:02:16.943 ==> default: -> value=-device,
00:02:16.943 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:02:16.943 ==> default: -> value=-drive,
00:02:16.943 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-1-drive0,
00:02:16.943 ==> default: -> value=-device,
00:02:16.943 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:16.943 ==> default: -> value=-device,
00:02:16.943 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:02:16.943 ==> default: -> value=-drive,
00:02:16.943 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:02:16.943 ==> default: -> value=-device,
00:02:16.943 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:16.943 ==> default: -> value=-drive,
00:02:16.943 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:02:16.943 ==> default: -> value=-device,
00:02:16.943 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:16.943 ==> default: -> value=-drive,
00:02:16.943 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:02:16.943 ==> default: -> value=-device,
00:02:16.943 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:16.943 ==> default: -> value=-device,
00:02:16.943 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:02:16.943 ==> default: -> value=-device,
00:02:16.943 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:02:16.943 ==> default: -> value=-drive,
00:02:16.943 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:02:16.943 ==> default: -> value=-device,
00:02:16.943 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:16.943 ==> default: Creating shared folders metadata...
00:02:16.943 ==> default: Starting domain.
00:02:18.859 ==> default: Waiting for domain to get an IP address...
00:02:36.968 ==> default: Waiting for SSH to become available...
00:02:36.968 ==> default: Configuring and enabling network interfaces...
00:02:39.501 default: SSH address: 192.168.121.103:22
00:02:39.501 default: SSH username: vagrant
00:02:39.501 default: SSH auth method: private key
00:02:41.459 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:48.013 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:02:53.319 ==> default: Mounting SSHFS shared folder...
00:02:55.225 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:55.225 ==> default: Checking Mount..
00:02:56.646 ==> default: Folder Successfully Mounted!
00:02:56.646
00:02:56.646 SUCCESS!
00:02:56.646
00:02:56.646 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:56.646 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:56.646 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:56.646
00:02:56.655 [Pipeline] }
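Reassembled from the value pairs logged above, the FDP-capable controller (nvme-3) amounts to a QEMU invocation along these lines; a sketch built only from the logged arguments, not the full generated domain:

    qemu-system-x86_64 \
        -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
        -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,if=none,id=nvme-3-drive0 \
        -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096

The nvme-subsys device carries the FDP parameters (reclaim unit size, reclaim groups, handles) and the controller joins it via subsys=, which is why the fdp.* knobs never appear on the nvme device itself.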
00:02:56.670 [Pipeline] // stage
00:02:56.679 [Pipeline] dir
00:02:56.680 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:56.682 [Pipeline] {
00:02:56.694 [Pipeline] catchError
00:02:56.696 [Pipeline] {
00:02:56.709 [Pipeline] sh
00:02:56.994 + vagrant ssh-config --host vagrant
00:02:56.994 + sed -ne '/^Host/,$p'
00:02:56.994 + tee ssh_conf
00:03:00.295 Host vagrant
00:03:00.295 HostName 192.168.121.103
00:03:00.295 User vagrant
00:03:00.295 Port 22
00:03:00.295 UserKnownHostsFile /dev/null
00:03:00.295 StrictHostKeyChecking no
00:03:00.295 PasswordAuthentication no
00:03:00.296 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:03:00.296 IdentitiesOnly yes
00:03:00.296 LogLevel FATAL
00:03:00.296 ForwardAgent yes
00:03:00.296 ForwardX11 yes
00:03:00.296
00:03:00.311 [Pipeline] withEnv
00:03:00.314 [Pipeline] {
00:03:00.326 [Pipeline] sh
00:03:00.611 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:03:00.611 source /etc/os-release
00:03:00.611 [[ -e /image.version ]] && img=$(< /image.version)
00:03:00.611 # Minimal, systemd-like check.
00:03:00.611 if [[ -e /.dockerenv ]]; then
00:03:00.611 # Clear garbage from the node'\''s name:
00:03:00.611 # agt-er_autotest_547-896 -> autotest_547-896
00:03:00.611 # $HOSTNAME is the actual container id
00:03:00.611 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:03:00.611 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:03:00.611 # We can assume this is a mount from a host where container is running,
00:03:00.611 # so fetch its hostname to easily identify the target swarm worker.
00:03:00.611 container="$(< /etc/hostname) ($agent)"
00:03:00.611 else
00:03:00.611 # Fallback
00:03:00.611 container=$agent
00:03:00.611 fi
00:03:00.611 fi
00:03:00.611 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:03:00.611 '
00:03:00.885 [Pipeline] }
00:03:00.900 [Pipeline] // withEnv
00:03:00.908 [Pipeline] setCustomBuildProperty
00:03:00.922 [Pipeline] stage
00:03:00.925 [Pipeline] { (Tests)
00:03:00.940 [Pipeline] sh
00:03:01.227 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:03:01.504 [Pipeline] sh
00:03:01.789 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:03:02.067 [Pipeline] timeout
00:03:02.067 Timeout set to expire in 50 min
00:03:02.069 [Pipeline] {
00:03:02.082 [Pipeline] sh
00:03:02.368 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:03:02.941 HEAD is now at f220d590c nvmf: rename passthrough_nsid -> passthru_nsid
00:03:02.956 [Pipeline] sh
00:03:03.270 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:03:03.547 [Pipeline] sh
00:03:03.833 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:03:04.113 [Pipeline] sh
00:03:04.399 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:03:04.660 ++ readlink -f spdk_repo
00:03:04.660 + DIR_ROOT=/home/vagrant/spdk_repo
00:03:04.660 + [[ -n /home/vagrant/spdk_repo ]]
00:03:04.660 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:03:04.660 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:03:04.660 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:03:04.660 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:03:04.660 + [[ -d /home/vagrant/spdk_repo/output ]]
00:03:04.660 + [[ nvme-vg-autotest == pkgdep-* ]]
00:03:04.660 + cd /home/vagrant/spdk_repo
00:03:04.660 + source /etc/os-release
00:03:04.660 ++ NAME='Fedora Linux'
00:03:04.660 ++ VERSION='39 (Cloud Edition)'
00:03:04.660 ++ ID=fedora
00:03:04.660 ++ VERSION_ID=39
00:03:04.660 ++ VERSION_CODENAME=
00:03:04.660 ++ PLATFORM_ID=platform:f39
00:03:04.660 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:03:04.660 ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:04.660 ++ LOGO=fedora-logo-icon
00:03:04.660 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:03:04.660 ++ HOME_URL=https://fedoraproject.org/
00:03:04.660 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:03:04.660 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:04.660 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:04.660 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:04.660 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:03:04.660 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:04.660 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:03:04.660 ++ SUPPORT_END=2024-11-12
00:03:04.660 ++ VARIANT='Cloud Edition'
00:03:04.660 ++ VARIANT_ID=cloud
00:03:04.660 + uname -a
00:03:04.660 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:03:04.660 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:03:04.921 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:05.181 Hugepages
00:03:05.181 node hugesize free / total
00:03:05.181 node0 1048576kB 0 / 0
00:03:05.442 node0 2048kB 0 / 0
00:03:05.442
00:03:05.442 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:05.442 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:03:05.442 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:03:05.442 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:03:05.442 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:03:05.442 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:03:05.442 + rm -f /tmp/spdk-ld-path
00:03:05.442 + source autorun-spdk.conf
00:03:05.442 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:05.442 ++ SPDK_TEST_NVME=1
00:03:05.442 ++ SPDK_TEST_FTL=1
00:03:05.442 ++ SPDK_TEST_ISAL=1
00:03:05.442 ++ SPDK_RUN_ASAN=1
00:03:05.442 ++ SPDK_RUN_UBSAN=1
00:03:05.442 ++ SPDK_TEST_XNVME=1
00:03:05.442 ++ SPDK_TEST_NVME_FDP=1
00:03:05.442 ++ SPDK_TEST_NATIVE_DPDK=main
00:03:05.442 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:03:05.442 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:05.442 ++ RUN_NIGHTLY=1
00:03:05.442 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:05.442 + [[ -n '' ]]
00:03:05.442 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:03:05.442 + for M in /var/spdk/build-*-manifest.txt
00:03:05.442 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:03:05.442 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:03:05.442 + for M in /var/spdk/build-*-manifest.txt
00:03:05.442 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:05.442 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:03:05.442 + for M in /var/spdk/build-*-manifest.txt
00:03:05.442 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:05.443 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:03:05.443 ++ uname
00:03:05.443 + [[ Linux == \L\i\n\u\x ]]
00:03:05.443 + sudo dmesg -T
00:03:05.443 + sudo dmesg --clear
00:03:05.443 + dmesg_pid=5767
00:03:05.443 + [[ Fedora Linux == FreeBSD ]]
00:03:05.443 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:05.443 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:05.443 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:05.443 + [[ -x /usr/src/fio-static/fio ]]
00:03:05.443 + sudo dmesg -Tw
00:03:05.443 + export FIO_BIN=/usr/src/fio-static/fio
00:03:05.443 + FIO_BIN=/usr/src/fio-static/fio
00:03:05.443 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:05.443 + [[ ! -v VFIO_QEMU_BIN ]]
00:03:05.443 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:05.443 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:05.443 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:05.443 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:05.443 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:05.443 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:05.443 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:05.705 17:37:25 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:03:05.705 17:37:25 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:05.705 17:37:25 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:05.705 17:37:25 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:03:05.705 17:37:25 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:03:05.705 17:37:25 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:03:05.705 17:37:25 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:03:05.705 17:37:25 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:03:05.705 17:37:25 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:03:05.705 17:37:25 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:03:05.705 17:37:25 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=main
00:03:05.705 17:37:25 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:03:05.705 17:37:25 -- spdk_repo/autorun-spdk.conf@11 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:05.705 17:37:25 -- spdk_repo/autorun-spdk.conf@12 -- $ RUN_NIGHTLY=1
00:03:05.705 17:37:25 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:03:05.705 17:37:25 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:05.705 17:37:25 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:03:05.705 17:37:25 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:03:05.705 17:37:25 -- scripts/common.sh@15 -- $ shopt -s extglob
00:03:05.705 17:37:25 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:03:05.705 17:37:25 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:05.705 17:37:25 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:05.705 17:37:25 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:05.705 17:37:25 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:05.705 17:37:25 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:05.705 17:37:25 -- paths/export.sh@5 -- $ export PATH
00:03:05.705 17:37:25 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:05.705 17:37:25 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:03:05.705 17:37:25 -- common/autobuild_common.sh@486 -- $ date +%s
00:03:05.705 17:37:25 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730828245.XXXXXX
00:03:05.705 17:37:25 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730828245.9xC3qZ
00:03:05.705 17:37:25 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:03:05.705 17:37:25 -- common/autobuild_common.sh@492 -- $ '[' -n main ']'
00:03:05.705 17:37:25 -- common/autobuild_common.sh@493 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:03:05.705 17:37:25 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:03:05.705 17:37:25 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:03:05.705 17:37:25 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:03:05.705 17:37:25 -- common/autobuild_common.sh@502 -- $ get_config_params
00:03:05.705 17:37:25 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:03:05.705 17:37:25 -- common/autotest_common.sh@10 -- $ set +x
00:03:05.705 17:37:25 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:03:05.705 17:37:25 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:03:05.705 17:37:25 -- pm/common@17 -- $ local monitor
00:03:05.705 17:37:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:05.705 17:37:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:05.705 17:37:25 -- pm/common@25 -- $ sleep 1
00:03:05.705 17:37:25 -- pm/common@21 -- $ date +%s
00:03:05.705 17:37:25 -- pm/common@21 -- $ date +%s
00:03:05.705 17:37:25 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730828245
00:03:05.705 17:37:25 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730828245
00:03:05.705 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730828245_collect-cpu-load.pm.log
00:03:05.705 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730828245_collect-vmstat.pm.log
00:03:06.650 17:37:26 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:03:06.650 17:37:26 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:03:06.650 17:37:26 -- spdk/autobuild.sh@12 -- $ umask 022
00:03:06.650 17:37:26 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:03:06.650 17:37:26 -- spdk/autobuild.sh@16 -- $ date -u
00:03:06.650 Tue Nov 5 05:37:26 PM UTC 2024
00:03:06.650 17:37:26 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:06.650 v25.01-pre-158-gf220d590c
00:03:06.650 17:37:26 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:03:06.650 17:37:26 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:03:06.650 17:37:26 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:03:06.650 17:37:26 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:03:06.650 17:37:26 -- common/autotest_common.sh@10 -- $ set +x
00:03:06.650 ************************************
00:03:06.650 START TEST asan
00:03:06.650 ************************************
00:03:06.650 using asan
00:03:06.650 17:37:26 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan'
00:03:06.650
00:03:06.650 real 0m0.000s
00:03:06.650 user 0m0.000s
00:03:06.650 sys 0m0.000s
00:03:06.650 17:37:26 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:03:06.650 ************************************
00:03:06.650 END TEST asan
00:03:06.650 ************************************
00:03:06.650 17:37:26 asan -- common/autotest_common.sh@10 -- $ set +x
00:03:06.913 17:37:26 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:03:06.913 17:37:26 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:03:06.913 17:37:26 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:03:06.913 17:37:26 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:03:06.913 17:37:26 -- common/autotest_common.sh@10 -- $ set +x
00:03:06.913 ************************************
00:03:06.913 START TEST ubsan
00:03:06.913 ************************************
00:03:06.913 using ubsan
00:03:06.913 17:37:26 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan'
00:03:06.913
00:03:06.913 real 0m0.000s
00:03:06.913 user 0m0.000s
00:03:06.913 sys 0m0.000s
00:03:06.913 ************************************
00:03:06.913 END TEST ubsan
00:03:06.913 ************************************
00:03:06.913 17:37:26 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:03:06.913 17:37:26 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:06.913 17:37:26 -- spdk/autobuild.sh@27 -- $ '[' -n main ']'
00:03:06.913 17:37:26 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:03:06.913 17:37:26 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk
00:03:06.913 17:37:26 -- common/autotest_common.sh@1103 -- $ '[' 2 -le 1 ']'
00:03:06.913 17:37:26 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:03:06.913 17:37:26 -- common/autotest_common.sh@10 -- $ set +x
00:03:06.913 ************************************
00:03:06.913 START TEST build_native_dpdk
00:03:06.913 ************************************
00:03:06.913 17:37:26 build_native_dpdk -- common/autotest_common.sh@1127 -- $ _build_native_dpdk
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]]
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:03:06.913 64f27886b8 eal/unix: optimize thread creation
00:03:06.913 c79900e31e app/dumpcap: check return value from adding interface
00:03:06.913 171360df9f net/mlx5: show incomplete records in Tx trace script
00:03:06.913 02932480ae net/mlx5: fix Tx tracing to use single clock source
00:03:06.913 27918f0d53 net/mlx5: fix real time counter reading from PCI BAR
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.11.0-rc1
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.11.0-rc1 21.11.0
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc1 '<' 21.11.0
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1
00:03:06.913 patching file config/rte_config.h
00:03:06.913 Hunk #1 succeeded at 71 (offset 12 lines).
00:03:06.913 17:37:26 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.11.0-rc1 24.07.0
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc1 '<' 24.07.0
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ ))
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]]
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07
00:03:06.913 17:37:26 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]]
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:03:06.914 17:37:26 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 24.11.0-rc1 24.07.0
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 24.11.0-rc1 '>=' 24.07.0
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>='
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@348 -- $ : 1
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ ))
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]]
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]]
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:03:06.914 17:37:26 build_native_dpdk -- scripts/common.sh@367 -- $ return 0
00:03:06.914 17:37:26 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1
00:03:06.914 patching file drivers/bus/pci/linux/pci_uio.c
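The cmp_versions traces above split each version on '.', '-', and ':' (IFS=.-:), run every field through decimal, and compare numerically until one side wins. A condensed sketch of the idea (a simplification of mine, not scripts/common.sh verbatim, which supports more operators and edge cases):

    ver_lt() {    # ver_lt A B: true if A < B, field-wise numeric compare
        local IFS='.-:'
        local -a a b
        read -ra a <<< "$1"; read -ra b <<< "$2"
        local i x y n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            x=${a[i]:-0}; y=${b[i]:-0}
            [[ $x =~ ^[0-9]+$ ]] || x=0   # non-numeric fields like "rc1" count as 0
            [[ $y =~ ^[0-9]+$ ]] || y=0
            (( 10#$x < 10#$y )) && return 0
            (( 10#$x > 10#$y )) && return 1
        done
        return 1                          # equal is not strictly less
    }
    ver_lt 24.11.0-rc1 21.11.0 || echo "not lt"   # matches the traced 'return 1'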
00:03:06.914 17:37:26 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false
00:03:06.914 17:37:26 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s
00:03:06.914 17:37:26 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']'
00:03:06.914 17:37:26 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
00:03:06.914 17:37:26 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:03:12.241 The Meson build system
00:03:12.241 Version: 1.5.0
00:03:12.241 Source dir: /home/vagrant/spdk_repo/dpdk
00:03:12.241 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp
00:03:12.241 Build type: native build
00:03:12.241 Project name: DPDK
00:03:12.241 Project version: 24.11.0-rc1
00:03:12.241 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:12.241 C linker for the host machine: gcc ld.bfd 2.40-14
00:03:12.241 Host machine cpu family: x86_64
00:03:12.241 Host machine cpu: x86_64
00:03:12.241 Message: ## Building in Developer Mode ##
00:03:12.241 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:12.241 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh)
00:03:12.241 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh)
00:03:12.241 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools
00:03:12.241 Program cat found: YES (/usr/bin/cat)
00:03:12.241 config/meson.build:119: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:03:12.241 Compiler for C supports arguments -march=native: YES
00:03:12.241 Checking for size of "void *" : 8
00:03:12.241 Checking for size of "void *" : 8 (cached)
00:03:12.241 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:12.241 Library m found: YES
00:03:12.241 Library numa found: YES
00:03:12.241 Has header "numaif.h" : YES
00:03:12.241 Library fdt found: NO
00:03:12.241 Library execinfo found: NO
00:03:12.241 Has header "execinfo.h" : YES
00:03:12.241 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:12.241 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:12.241 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:12.241 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:12.241 Run-time dependency openssl found: YES 3.1.1
00:03:12.241 Run-time dependency libpcap found: YES 1.10.4
00:03:12.241 Has header "pcap.h" with dependency libpcap: YES
00:03:12.241 Compiler for C supports arguments -Wcast-qual: YES
00:03:12.241 Compiler for C supports arguments -Wdeprecated: YES
00:03:12.241 Compiler for C supports arguments -Wformat: YES
00:03:12.241 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:12.241 Compiler for C supports arguments -Wformat-security: NO
00:03:12.241 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:12.241 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:12.241 Compiler for C supports arguments -Wnested-externs: YES
00:03:12.241 Compiler for C supports arguments -Wold-style-definition: YES
00:03:12.241 Compiler for C supports arguments -Wpointer-arith: YES
00:03:12.241 Compiler for C supports arguments -Wsign-compare: YES
00:03:12.241 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:12.241 Compiler for C supports arguments -Wundef: YES
00:03:12.241 Compiler for C supports arguments -Wwrite-strings: YES
00:03:12.241 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:12.241 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:12.241 Program objdump found: YES (/usr/bin/objdump)
00:03:12.241 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512dq -mavx512bw: YES
00:03:12.241 Checking if "AVX512 checking" compiles: YES
00:03:12.241 Fetching value of define "__AVX512F__" : 1
00:03:12.241 Fetching value of define "__AVX512BW__" : 1
00:03:12.241 Fetching value of define "__AVX512DQ__" : 1
00:03:12.241 Fetching value of define "__AVX512VL__" : 1
00:03:12.241 Fetching value of define "__SSE4_2__" : 1
00:03:12.241 Fetching value of define "__AES__" : 1
00:03:12.241 Fetching value of define "__AVX__" : 1
00:03:12.241 Fetching value of define "__AVX2__" : 1
00:03:12.241 Fetching value of define "__AVX512BW__" : 1 00:03:12.242 Fetching value of define "__AVX512CD__" : 1 00:03:12.242 Fetching value of define "__AVX512DQ__" : 1 00:03:12.242 Fetching value of define "__AVX512F__" : 1 00:03:12.242 Fetching value of define "__AVX512VL__" : 1 00:03:12.242 Fetching value of define "__PCLMUL__" : 1 00:03:12.242 Fetching value of define "__RDRND__" : 1 00:03:12.242 Fetching value of define "__RDSEED__" : 1 00:03:12.242 Fetching value of define "__VPCLMULQDQ__" : 1 00:03:12.242 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:12.242 Message: lib/log: Defining dependency "log" 00:03:12.242 Message: lib/kvargs: Defining dependency "kvargs" 00:03:12.242 Message: lib/argparse: Defining dependency "argparse" 00:03:12.242 Message: lib/telemetry: Defining dependency "telemetry" 00:03:12.242 Checking for function "pthread_attr_setaffinity_np" : YES 00:03:12.242 Checking for function "getentropy" : NO 00:03:12.242 Message: lib/eal: Defining dependency "eal" 00:03:12.242 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:03:12.242 Message: lib/ring: Defining dependency "ring" 00:03:12.242 Message: lib/rcu: Defining dependency "rcu" 00:03:12.242 Message: lib/mempool: Defining dependency "mempool" 00:03:12.242 Message: lib/mbuf: Defining dependency "mbuf" 00:03:12.242 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:12.242 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:03:12.242 Compiler for C supports arguments -mpclmul: YES 00:03:12.242 Compiler for C supports arguments -maes: YES 00:03:12.242 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:12.242 Message: lib/net: Defining dependency "net" 00:03:12.242 Message: lib/meter: Defining dependency "meter" 00:03:12.242 Message: lib/ethdev: Defining dependency "ethdev" 00:03:12.242 Message: lib/pci: Defining dependency "pci" 00:03:12.242 Message: lib/cmdline: Defining dependency "cmdline" 00:03:12.242 Message: lib/metrics: Defining dependency "metrics" 00:03:12.242 Message: lib/hash: Defining dependency "hash" 00:03:12.242 Message: lib/timer: Defining dependency "timer" 00:03:12.242 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:12.242 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:12.242 Fetching value of define "__AVX512CD__" : 1 (cached) 00:03:12.242 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:12.242 Message: lib/acl: Defining dependency "acl" 00:03:12.242 Message: lib/bbdev: Defining dependency "bbdev" 00:03:12.242 Message: lib/bitratestats: Defining dependency "bitratestats" 00:03:12.242 Run-time dependency libelf found: YES 0.191 00:03:12.242 Message: lib/bpf: Defining dependency "bpf" 00:03:12.242 Message: lib/cfgfile: Defining dependency "cfgfile" 00:03:12.242 Message: lib/compressdev: Defining dependency "compressdev" 00:03:12.242 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:12.242 Message: lib/distributor: Defining dependency "distributor" 00:03:12.242 Message: lib/dmadev: Defining dependency "dmadev" 00:03:12.242 Message: lib/efd: Defining dependency "efd" 00:03:12.242 Message: lib/eventdev: Defining dependency "eventdev" 00:03:12.242 Message: lib/dispatcher: Defining dependency "dispatcher" 00:03:12.242 Message: lib/gpudev: Defining dependency "gpudev" 00:03:12.242 Message: lib/gro: Defining dependency "gro" 00:03:12.242 Message: lib/gso: Defining dependency "gso" 00:03:12.242 Message: lib/ip_frag: Defining dependency "ip_frag" 00:03:12.242 Message: lib/jobstats: Defining dependency "jobstats" 
00:03:12.242 Message: lib/latencystats: Defining dependency "latencystats" 00:03:12.242 Message: lib/lpm: Defining dependency "lpm" 00:03:12.242 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:12.242 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:12.242 Fetching value of define "__AVX512IFMA__" : 1 00:03:12.242 Message: lib/member: Defining dependency "member" 00:03:12.242 Message: lib/pcapng: Defining dependency "pcapng" 00:03:12.242 Message: lib/power: Defining dependency "power" 00:03:12.242 Message: lib/rawdev: Defining dependency "rawdev" 00:03:12.242 Message: lib/regexdev: Defining dependency "regexdev" 00:03:12.242 Message: lib/mldev: Defining dependency "mldev" 00:03:12.242 Message: lib/rib: Defining dependency "rib" 00:03:12.242 Message: lib/reorder: Defining dependency "reorder" 00:03:12.242 Message: lib/sched: Defining dependency "sched" 00:03:12.242 Message: lib/security: Defining dependency "security" 00:03:12.242 Message: lib/stack: Defining dependency "stack" 00:03:12.242 Has header "linux/userfaultfd.h" : YES 00:03:12.242 Message: lib/vhost: Defining dependency "vhost" 00:03:12.242 Message: lib/ipsec: Defining dependency "ipsec" 00:03:12.242 Message: lib/pdcp: Defining dependency "pdcp" 00:03:12.242 Message: lib/fib: Defining dependency "fib" 00:03:12.242 Message: lib/port: Defining dependency "port" 00:03:12.242 Message: lib/pdump: Defining dependency "pdump" 00:03:12.242 Message: lib/table: Defining dependency "table" 00:03:12.242 Message: lib/pipeline: Defining dependency "pipeline" 00:03:12.242 Message: lib/graph: Defining dependency "graph" 00:03:12.242 Message: lib/node: Defining dependency "node" 00:03:12.242 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:12.242 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:12.242 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:12.242 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:12.242 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:12.242 Compiler for C supports arguments -Wno-sign-compare: YES 00:03:12.242 Compiler for C supports arguments -Wno-unused-value: YES 00:03:12.242 Compiler for C supports arguments -Wno-format: YES 00:03:12.242 Compiler for C supports arguments -Wno-format-security: YES 00:03:12.242 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:03:12.242 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:12.242 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:03:12.242 Compiler for C supports arguments -Wno-unused-parameter: YES 00:03:12.242 Compiler for C supports arguments -march=skylake-avx512: YES 00:03:12.242 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:03:12.504 Has header "sys/epoll.h" : YES 00:03:12.504 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:12.504 Configuring doxy-api-html.conf using configuration 00:03:12.504 doc/api/meson.build:54: WARNING: The variable(s) 'DTS_API_MAIN_PAGE' in the input file 'doc/api/doxy-api.conf.in' are not present in the given configuration data. 00:03:12.504 Configuring doxy-api-man.conf using configuration 00:03:12.504 doc/api/meson.build:67: WARNING: The variable(s) 'DTS_API_MAIN_PAGE' in the input file 'doc/api/doxy-api.conf.in' are not present in the given configuration data. 
00:03:12.504 Program mandb found: YES (/usr/bin/mandb) 00:03:12.504 Program sphinx-build found: NO 00:03:12.504 Program sphinx-build found: NO 00:03:12.504 Configuring rte_build_config.h using configuration 00:03:12.504 Message: 00:03:12.504 ================= 00:03:12.504 Applications Enabled 00:03:12.504 ================= 00:03:12.504 00:03:12.504 apps: 00:03:12.504 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:03:12.504 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:03:12.504 test-pmd, test-regex, test-sad, test-security-perf, 00:03:12.504 00:03:12.504 Message: 00:03:12.504 ================= 00:03:12.504 Libraries Enabled 00:03:12.504 ================= 00:03:12.504 00:03:12.504 libs: 00:03:12.504 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:03:12.504 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:03:12.504 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:03:12.504 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:03:12.504 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:03:12.504 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:03:12.504 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 00:03:12.504 graph, node, 00:03:12.504 00:03:12.504 Message: 00:03:12.504 =============== 00:03:12.504 Drivers Enabled 00:03:12.504 =============== 00:03:12.504 00:03:12.504 common: 00:03:12.504 00:03:12.504 bus: 00:03:12.504 pci, vdev, 00:03:12.504 mempool: 00:03:12.504 ring, 00:03:12.504 dma: 00:03:12.504 00:03:12.504 net: 00:03:12.504 i40e, 00:03:12.504 raw: 00:03:12.504 00:03:12.504 crypto: 00:03:12.504 00:03:12.504 compress: 00:03:12.504 00:03:12.504 regex: 00:03:12.504 00:03:12.504 ml: 00:03:12.504 00:03:12.504 vdpa: 00:03:12.504 00:03:12.504 event: 00:03:12.504 00:03:12.504 baseband: 00:03:12.504 00:03:12.504 gpu: 00:03:12.504 00:03:12.504 00:03:12.504 Message: 00:03:12.504 ================= 00:03:12.504 Content Skipped 00:03:12.504 ================= 00:03:12.504 00:03:12.504 apps: 00:03:12.504 00:03:12.504 libs: 00:03:12.504 00:03:12.504 drivers: 00:03:12.504 common/cpt: not in enabled drivers build config 00:03:12.504 common/dpaax: not in enabled drivers build config 00:03:12.504 common/iavf: not in enabled drivers build config 00:03:12.504 common/idpf: not in enabled drivers build config 00:03:12.504 common/ionic: not in enabled drivers build config 00:03:12.504 common/mvep: not in enabled drivers build config 00:03:12.504 common/octeontx: not in enabled drivers build config 00:03:12.504 bus/auxiliary: not in enabled drivers build config 00:03:12.504 bus/cdx: not in enabled drivers build config 00:03:12.504 bus/dpaa: not in enabled drivers build config 00:03:12.504 bus/fslmc: not in enabled drivers build config 00:03:12.504 bus/ifpga: not in enabled drivers build config 00:03:12.504 bus/platform: not in enabled drivers build config 00:03:12.504 bus/uacce: not in enabled drivers build config 00:03:12.504 bus/vmbus: not in enabled drivers build config 00:03:12.504 common/cnxk: not in enabled drivers build config 00:03:12.504 common/mlx5: not in enabled drivers build config 00:03:12.504 common/nfp: not in enabled drivers build config 00:03:12.504 common/nitrox: not in enabled drivers build config 00:03:12.504 common/qat: not in enabled drivers build config 00:03:12.504 common/sfc_efx: not in enabled drivers build config 00:03:12.504 mempool/bucket: not in 
enabled drivers build config 00:03:12.504 mempool/cnxk: not in enabled drivers build config 00:03:12.504 mempool/dpaa: not in enabled drivers build config 00:03:12.504 mempool/dpaa2: not in enabled drivers build config 00:03:12.504 mempool/octeontx: not in enabled drivers build config 00:03:12.504 mempool/stack: not in enabled drivers build config 00:03:12.504 dma/cnxk: not in enabled drivers build config 00:03:12.504 dma/dpaa: not in enabled drivers build config 00:03:12.504 dma/dpaa2: not in enabled drivers build config 00:03:12.504 dma/hisilicon: not in enabled drivers build config 00:03:12.504 dma/idxd: not in enabled drivers build config 00:03:12.504 dma/ioat: not in enabled drivers build config 00:03:12.504 dma/odm: not in enabled drivers build config 00:03:12.504 dma/skeleton: not in enabled drivers build config 00:03:12.504 net/af_packet: not in enabled drivers build config 00:03:12.504 net/af_xdp: not in enabled drivers build config 00:03:12.504 net/ark: not in enabled drivers build config 00:03:12.504 net/atlantic: not in enabled drivers build config 00:03:12.504 net/avp: not in enabled drivers build config 00:03:12.504 net/axgbe: not in enabled drivers build config 00:03:12.504 net/bnx2x: not in enabled drivers build config 00:03:12.504 net/bnxt: not in enabled drivers build config 00:03:12.504 net/bonding: not in enabled drivers build config 00:03:12.504 net/cnxk: not in enabled drivers build config 00:03:12.504 net/cpfl: not in enabled drivers build config 00:03:12.504 net/cxgbe: not in enabled drivers build config 00:03:12.504 net/dpaa: not in enabled drivers build config 00:03:12.504 net/dpaa2: not in enabled drivers build config 00:03:12.504 net/e1000: not in enabled drivers build config 00:03:12.504 net/ena: not in enabled drivers build config 00:03:12.504 net/enetc: not in enabled drivers build config 00:03:12.504 net/enetfec: not in enabled drivers build config 00:03:12.504 net/enic: not in enabled drivers build config 00:03:12.504 net/failsafe: not in enabled drivers build config 00:03:12.504 net/fm10k: not in enabled drivers build config 00:03:12.504 net/gve: not in enabled drivers build config 00:03:12.504 net/hinic: not in enabled drivers build config 00:03:12.504 net/hns3: not in enabled drivers build config 00:03:12.504 net/iavf: not in enabled drivers build config 00:03:12.504 net/ice: not in enabled drivers build config 00:03:12.504 net/idpf: not in enabled drivers build config 00:03:12.504 net/igc: not in enabled drivers build config 00:03:12.504 net/ionic: not in enabled drivers build config 00:03:12.504 net/ipn3ke: not in enabled drivers build config 00:03:12.504 net/ixgbe: not in enabled drivers build config 00:03:12.504 net/mana: not in enabled drivers build config 00:03:12.504 net/memif: not in enabled drivers build config 00:03:12.504 net/mlx4: not in enabled drivers build config 00:03:12.504 net/mlx5: not in enabled drivers build config 00:03:12.504 net/mvneta: not in enabled drivers build config 00:03:12.504 net/mvpp2: not in enabled drivers build config 00:03:12.504 net/netvsc: not in enabled drivers build config 00:03:12.504 net/nfb: not in enabled drivers build config 00:03:12.504 net/nfp: not in enabled drivers build config 00:03:12.504 net/ngbe: not in enabled drivers build config 00:03:12.504 net/ntnic: not in enabled drivers build config 00:03:12.504 net/null: not in enabled drivers build config 00:03:12.504 net/octeontx: not in enabled drivers build config 00:03:12.504 net/octeon_ep: not in enabled drivers build config 00:03:12.504 net/pcap: not 
in enabled drivers build config 00:03:12.504 net/pfe: not in enabled drivers build config 00:03:12.505 net/qede: not in enabled drivers build config 00:03:12.505 net/ring: not in enabled drivers build config 00:03:12.505 net/sfc: not in enabled drivers build config 00:03:12.505 net/softnic: not in enabled drivers build config 00:03:12.505 net/tap: not in enabled drivers build config 00:03:12.505 net/thunderx: not in enabled drivers build config 00:03:12.505 net/txgbe: not in enabled drivers build config 00:03:12.505 net/vdev_netvsc: not in enabled drivers build config 00:03:12.505 net/vhost: not in enabled drivers build config 00:03:12.505 net/virtio: not in enabled drivers build config 00:03:12.505 net/vmxnet3: not in enabled drivers build config 00:03:12.505 raw/cnxk_bphy: not in enabled drivers build config 00:03:12.505 raw/cnxk_gpio: not in enabled drivers build config 00:03:12.505 raw/dpaa2_cmdif: not in enabled drivers build config 00:03:12.505 raw/ifpga: not in enabled drivers build config 00:03:12.505 raw/ntb: not in enabled drivers build config 00:03:12.505 raw/skeleton: not in enabled drivers build config 00:03:12.505 crypto/armv8: not in enabled drivers build config 00:03:12.505 crypto/bcmfs: not in enabled drivers build config 00:03:12.505 crypto/caam_jr: not in enabled drivers build config 00:03:12.505 crypto/ccp: not in enabled drivers build config 00:03:12.505 crypto/cnxk: not in enabled drivers build config 00:03:12.505 crypto/dpaa_sec: not in enabled drivers build config 00:03:12.505 crypto/dpaa2_sec: not in enabled drivers build config 00:03:12.505 crypto/ionic: not in enabled drivers build config 00:03:12.505 crypto/ipsec_mb: not in enabled drivers build config 00:03:12.505 crypto/mlx5: not in enabled drivers build config 00:03:12.505 crypto/mvsam: not in enabled drivers build config 00:03:12.505 crypto/nitrox: not in enabled drivers build config 00:03:12.505 crypto/null: not in enabled drivers build config 00:03:12.505 crypto/octeontx: not in enabled drivers build config 00:03:12.505 crypto/openssl: not in enabled drivers build config 00:03:12.505 crypto/scheduler: not in enabled drivers build config 00:03:12.505 crypto/uadk: not in enabled drivers build config 00:03:12.505 crypto/virtio: not in enabled drivers build config 00:03:12.505 compress/isal: not in enabled drivers build config 00:03:12.505 compress/mlx5: not in enabled drivers build config 00:03:12.505 compress/nitrox: not in enabled drivers build config 00:03:12.505 compress/octeontx: not in enabled drivers build config 00:03:12.505 compress/uadk: not in enabled drivers build config 00:03:12.505 compress/zlib: not in enabled drivers build config 00:03:12.505 regex/mlx5: not in enabled drivers build config 00:03:12.505 regex/cn9k: not in enabled drivers build config 00:03:12.505 ml/cnxk: not in enabled drivers build config 00:03:12.505 vdpa/ifc: not in enabled drivers build config 00:03:12.505 vdpa/mlx5: not in enabled drivers build config 00:03:12.505 vdpa/nfp: not in enabled drivers build config 00:03:12.505 vdpa/sfc: not in enabled drivers build config 00:03:12.505 event/cnxk: not in enabled drivers build config 00:03:12.505 event/dlb2: not in enabled drivers build config 00:03:12.505 event/dpaa: not in enabled drivers build config 00:03:12.505 event/dpaa2: not in enabled drivers build config 00:03:12.505 event/dsw: not in enabled drivers build config 00:03:12.505 event/opdl: not in enabled drivers build config 00:03:12.505 event/skeleton: not in enabled drivers build config 00:03:12.505 event/sw: not in 
enabled drivers build config 00:03:12.505 event/octeontx: not in enabled drivers build config 00:03:12.505 baseband/acc: not in enabled drivers build config 00:03:12.505 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:03:12.505 baseband/fpga_lte_fec: not in enabled drivers build config 00:03:12.505 baseband/la12xx: not in enabled drivers build config 00:03:12.505 baseband/null: not in enabled drivers build config 00:03:12.505 baseband/turbo_sw: not in enabled drivers build config 00:03:12.505 gpu/cuda: not in enabled drivers build config 00:03:12.505 00:03:12.505 00:03:12.505 Build targets in project: 219 00:03:12.505 00:03:12.505 DPDK 24.11.0-rc1 00:03:12.505 00:03:12.505 User defined options 00:03:12.505 libdir : lib 00:03:12.505 prefix : /home/vagrant/spdk_repo/dpdk/build 00:03:12.505 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:03:12.505 c_link_args : 00:03:12.505 enable_docs : false 00:03:12.505 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:03:12.505 enable_kmods : false 00:03:12.505 machine : native 00:03:12.505 tests : false 00:03:12.505 00:03:12.505 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:12.505 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:03:12.505 17:37:32 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:03:12.505 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:12.766 [1/719] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:12.766 [2/719] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:12.766 [3/719] Linking static target lib/librte_kvargs.a 00:03:12.766 [4/719] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:12.766 [5/719] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:12.766 [6/719] Linking static target lib/librte_log.a 00:03:12.766 [7/719] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:03:12.766 [8/719] Linking static target lib/librte_argparse.a 00:03:13.027 [9/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:13.027 [10/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:13.027 [11/719] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.027 [12/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:13.027 [13/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:13.027 [14/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:13.027 [15/719] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:13.027 [16/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:13.027 [17/719] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.287 [18/719] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.287 [19/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:13.287 [20/719] Linking target lib/librte_log.so.25.0 00:03:13.287 [21/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:13.287 [22/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:13.287 [23/719] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:13.548 [24/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:13.548 [25/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:13.548 [26/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:13.548 [27/719] Generating symbol file lib/librte_log.so.25.0.p/librte_log.so.25.0.symbols 00:03:13.548 [28/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:13.548 [29/719] Linking target lib/librte_kvargs.so.25.0 00:03:13.548 [30/719] Linking target lib/librte_argparse.so.25.0 00:03:13.548 [31/719] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:13.548 [32/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:13.548 [33/719] Linking static target lib/librte_telemetry.a 00:03:13.548 [34/719] Generating symbol file lib/librte_kvargs.so.25.0.p/librte_kvargs.so.25.0.symbols 00:03:13.806 [35/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:13.807 [36/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:13.807 [37/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:13.807 [38/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:13.807 [39/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:13.807 [40/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:14.064 [41/719] Compiling C object lib/librte_eal.a.p/eal_common_rte_bitset.c.o 00:03:14.064 [42/719] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:14.064 [43/719] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:14.064 [44/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:14.064 [45/719] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.064 [46/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:14.064 [47/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:14.064 [48/719] Linking target lib/librte_telemetry.so.25.0 00:03:14.064 [49/719] Generating symbol file lib/librte_telemetry.so.25.0.p/librte_telemetry.so.25.0.symbols 00:03:14.064 [50/719] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:14.324 [51/719] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:14.324 [52/719] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:14.324 [53/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:14.324 [54/719] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:14.324 [55/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:14.324 [56/719] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:14.324 [57/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:14.585 [58/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:14.585 [59/719] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:14.585 [60/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:14.585 [61/719] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:14.585 [62/719] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:14.585 [63/719] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:14.585 [64/719] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:14.585 [65/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:14.843 [66/719] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:14.843 [67/719] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:14.843 [68/719] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:14.843 [69/719] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:14.843 [70/719] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:14.843 [71/719] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:14.843 [72/719] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:14.843 [73/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:15.102 [74/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:15.102 [75/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:15.102 [76/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:15.102 [77/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:15.102 [78/719] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:15.102 [79/719] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:15.102 [80/719] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:15.102 [81/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:15.102 [82/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:15.360 [83/719] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:15.360 [84/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:15.360 [85/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:15.360 [86/719] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:03:15.360 [87/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:15.360 [88/719] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:15.360 [89/719] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:15.360 [90/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:15.360 [91/719] Linking static target lib/librte_ring.a 00:03:15.618 [92/719] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:15.618 [93/719] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:15.618 [94/719] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:15.618 [95/719] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.618 [96/719] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:15.876 [97/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:15.876 [98/719] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:15.876 [99/719] Linking static target lib/librte_eal.a 00:03:15.876 [100/719] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:15.876 [101/719] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:15.876 [102/719] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:15.876 [103/719] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 
00:03:15.876 [104/719] Linking static target lib/librte_rcu.a 00:03:16.133 [105/719] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:16.133 [106/719] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:16.134 [107/719] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:16.134 [108/719] Linking static target lib/librte_mempool.a 00:03:16.134 [109/719] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:16.134 [110/719] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.134 [111/719] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:16.134 [112/719] Linking static target lib/librte_meter.a 00:03:16.391 [113/719] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:16.391 [114/719] Linking static target lib/librte_net.a 00:03:16.391 [115/719] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:16.391 [116/719] Linking static target lib/librte_mbuf.a 00:03:16.391 [117/719] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:16.391 [118/719] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.391 [119/719] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:16.391 [120/719] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.391 [121/719] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:16.648 [122/719] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:16.648 [123/719] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.906 [124/719] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.906 [125/719] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:16.906 [126/719] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:17.163 [127/719] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:17.163 [128/719] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:17.163 [129/719] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:17.421 [130/719] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:17.421 [131/719] Linking static target lib/librte_pci.a 00:03:17.421 [132/719] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:17.421 [133/719] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:17.421 [134/719] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:17.421 [135/719] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:17.421 [136/719] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:17.421 [137/719] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:17.421 [138/719] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.679 [139/719] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:17.679 [140/719] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:17.679 [141/719] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:17.679 [142/719] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:17.679 [143/719] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:17.679 [144/719] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:17.679 [145/719] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:17.679 [146/719] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:17.679 [147/719] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:17.679 [148/719] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:17.679 [149/719] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:17.679 [150/719] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:17.679 [151/719] Linking static target lib/librte_cmdline.a 00:03:17.937 [152/719] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:17.937 [153/719] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:17.937 [154/719] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:17.937 [155/719] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:17.937 [156/719] Linking static target lib/librte_metrics.a 00:03:17.937 [157/719] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:18.195 [158/719] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:18.195 [159/719] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.453 [160/719] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:18.453 [161/719] Linking static target lib/librte_timer.a 00:03:18.453 [162/719] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.453 [163/719] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:18.453 [164/719] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:18.712 [165/719] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.712 [166/719] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:18.712 [167/719] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:18.968 [168/719] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:19.227 [169/719] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:19.227 [170/719] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:19.227 [171/719] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:19.227 [172/719] Linking static target lib/librte_bitratestats.a 00:03:19.525 [173/719] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:19.525 [174/719] Linking static target lib/librte_bbdev.a 00:03:19.525 [175/719] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.525 [176/719] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:19.783 [177/719] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:19.783 [178/719] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:19.783 [179/719] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:03:19.783 [180/719] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.783 [181/719] Linking static target lib/acl/libavx2_tmp.a 00:03:19.783 [182/719] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:19.783 [183/719] Linking static target lib/librte_hash.a 00:03:19.783 [184/719] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:19.783 [185/719] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 
00:03:20.041 [186/719] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:20.041 [187/719] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:20.041 [188/719] Linking static target lib/librte_cfgfile.a 00:03:20.041 [189/719] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:20.041 [190/719] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:20.041 [191/719] Linking static target lib/librte_ethdev.a 00:03:20.300 [192/719] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.300 [193/719] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:20.300 [194/719] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.300 [195/719] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:20.300 [196/719] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:20.300 [197/719] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.558 [198/719] Linking target lib/librte_eal.so.25.0 00:03:20.558 [199/719] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:20.558 [200/719] Generating symbol file lib/librte_eal.so.25.0.p/librte_eal.so.25.0.symbols 00:03:20.558 [201/719] Linking target lib/librte_ring.so.25.0 00:03:20.558 [202/719] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:20.558 [203/719] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:20.558 [204/719] Linking target lib/librte_meter.so.25.0 00:03:20.558 [205/719] Linking target lib/librte_pci.so.25.0 00:03:20.558 [206/719] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:20.815 [207/719] Generating symbol file lib/librte_ring.so.25.0.p/librte_ring.so.25.0.symbols 00:03:20.815 [208/719] Linking target lib/librte_timer.so.25.0 00:03:20.816 [209/719] Linking target lib/librte_rcu.so.25.0 00:03:20.816 [210/719] Generating symbol file lib/librte_meter.so.25.0.p/librte_meter.so.25.0.symbols 00:03:20.816 [211/719] Linking target lib/librte_mempool.so.25.0 00:03:20.816 [212/719] Generating symbol file lib/librte_pci.so.25.0.p/librte_pci.so.25.0.symbols 00:03:20.816 [213/719] Linking static target lib/librte_bpf.a 00:03:20.816 [214/719] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:20.816 [215/719] Generating symbol file lib/librte_rcu.so.25.0.p/librte_rcu.so.25.0.symbols 00:03:20.816 [216/719] Linking static target lib/librte_compressdev.a 00:03:20.816 [217/719] Generating symbol file lib/librte_mempool.so.25.0.p/librte_mempool.so.25.0.symbols 00:03:20.816 [218/719] Linking target lib/librte_cfgfile.so.25.0 00:03:20.816 [219/719] Generating symbol file lib/librte_timer.so.25.0.p/librte_timer.so.25.0.symbols 00:03:20.816 [220/719] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:20.816 [221/719] Linking target lib/librte_mbuf.so.25.0 00:03:21.074 [222/719] Generating symbol file lib/librte_mbuf.so.25.0.p/librte_mbuf.so.25.0.symbols 00:03:21.074 [223/719] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:21.074 [224/719] Linking target lib/librte_net.so.25.0 00:03:21.074 [225/719] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.074 [226/719] Linking target lib/librte_bbdev.so.25.0 00:03:21.074 [227/719] Compiling C object 
lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:03:21.074 [228/719] Linking static target lib/librte_acl.a 00:03:21.074 [229/719] Generating symbol file lib/librte_net.so.25.0.p/librte_net.so.25.0.symbols 00:03:21.074 [230/719] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:21.074 [231/719] Linking target lib/librte_cmdline.so.25.0 00:03:21.074 [232/719] Linking static target lib/librte_distributor.a 00:03:21.074 [233/719] Linking target lib/librte_hash.so.25.0 00:03:21.074 [234/719] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:21.332 [235/719] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.332 [236/719] Generating symbol file lib/librte_hash.so.25.0.p/librte_hash.so.25.0.symbols 00:03:21.332 [237/719] Linking target lib/librte_compressdev.so.25.0 00:03:21.332 [238/719] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.332 [239/719] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.332 [240/719] Linking target lib/librte_distributor.so.25.0 00:03:21.332 [241/719] Linking target lib/librte_acl.so.25.0 00:03:21.332 [242/719] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:21.593 [243/719] Generating symbol file lib/librte_acl.so.25.0.p/librte_acl.so.25.0.symbols 00:03:21.593 [244/719] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:21.593 [245/719] Linking static target lib/librte_dmadev.a 00:03:21.853 [246/719] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:21.853 [247/719] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:21.853 [248/719] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:03:21.853 [249/719] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:21.853 [250/719] Linking static target lib/librte_efd.a 00:03:21.853 [251/719] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.112 [252/719] Linking target lib/librte_dmadev.so.25.0 00:03:22.112 [253/719] Generating symbol file lib/librte_dmadev.so.25.0.p/librte_dmadev.so.25.0.symbols 00:03:22.112 [254/719] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.112 [255/719] Linking target lib/librte_efd.so.25.0 00:03:22.370 [256/719] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:22.370 [257/719] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:03:22.370 [258/719] Linking static target lib/librte_dispatcher.a 00:03:22.370 [259/719] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:22.370 [260/719] Linking static target lib/librte_cryptodev.a 00:03:22.628 [261/719] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:22.628 [262/719] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:22.628 [263/719] Linking static target lib/librte_gpudev.a 00:03:22.628 [264/719] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.628 [265/719] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:22.886 [266/719] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:22.886 [267/719] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:22.886 
[268/719] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:03:23.144 [269/719] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:23.144 [270/719] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:23.144 [271/719] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:23.144 [272/719] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.144 [273/719] Linking target lib/librte_gpudev.so.25.0 00:03:23.402 [274/719] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:23.402 [275/719] Linking static target lib/librte_eventdev.a 00:03:23.402 [276/719] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:23.402 [277/719] Linking static target lib/librte_gro.a 00:03:23.402 [278/719] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:23.402 [279/719] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:23.402 [280/719] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:23.402 [281/719] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.402 [282/719] Linking target lib/librte_cryptodev.so.25.0 00:03:23.402 [283/719] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.660 [284/719] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:23.660 [285/719] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:23.660 [286/719] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:23.660 [287/719] Linking static target lib/librte_gso.a 00:03:23.660 [288/719] Generating symbol file lib/librte_cryptodev.so.25.0.p/librte_cryptodev.so.25.0.symbols 00:03:23.660 [289/719] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:23.660 [290/719] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.918 [291/719] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:23.918 [292/719] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:23.918 [293/719] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:23.918 [294/719] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:23.918 [295/719] Linking static target lib/librte_jobstats.a 00:03:23.918 [296/719] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:23.918 [297/719] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:23.918 [298/719] Linking static target lib/librte_ip_frag.a 00:03:24.180 [299/719] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.180 [300/719] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:24.180 [301/719] Linking target lib/librte_jobstats.so.25.0 00:03:24.180 [302/719] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:24.180 [303/719] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.180 [304/719] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:24.180 [305/719] Linking static target lib/librte_latencystats.a 00:03:24.444 [306/719] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:24.444 [307/719] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.444 [308/719] Linking target 
lib/librte_ethdev.so.25.0 00:03:24.444 [309/719] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.444 [310/719] Generating symbol file lib/librte_ethdev.so.25.0.p/librte_ethdev.so.25.0.symbols 00:03:24.444 [311/719] Linking target lib/librte_metrics.so.25.0 00:03:24.444 [312/719] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:03:24.444 [313/719] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:24.702 [314/719] Linking target lib/librte_gro.so.25.0 00:03:24.702 [315/719] Linking target lib/librte_bpf.so.25.0 00:03:24.702 [316/719] Linking target lib/librte_gso.so.25.0 00:03:24.702 [317/719] Generating symbol file lib/librte_metrics.so.25.0.p/librte_metrics.so.25.0.symbols 00:03:24.702 [318/719] Linking target lib/librte_ip_frag.so.25.0 00:03:24.702 [319/719] Linking target lib/librte_bitratestats.so.25.0 00:03:24.702 [320/719] Linking target lib/librte_latencystats.so.25.0 00:03:24.702 [321/719] Linking static target lib/librte_lpm.a 00:03:24.702 [322/719] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:24.702 [323/719] Generating symbol file lib/librte_bpf.so.25.0.p/librte_bpf.so.25.0.symbols 00:03:24.702 [324/719] Generating symbol file lib/librte_ip_frag.so.25.0.p/librte_ip_frag.so.25.0.symbols 00:03:24.702 [325/719] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:24.702 [326/719] Linking static target lib/librte_pcapng.a 00:03:24.702 [327/719] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:24.702 [328/719] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:24.702 [329/719] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:24.960 [330/719] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.960 [331/719] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:24.960 [332/719] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:24.960 [333/719] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.960 [334/719] Linking target lib/librte_pcapng.so.25.0 00:03:24.960 [335/719] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.960 [336/719] Linking target lib/librte_eventdev.so.25.0 00:03:24.960 [337/719] Linking target lib/librte_lpm.so.25.0 00:03:24.960 [338/719] Generating symbol file lib/librte_pcapng.so.25.0.p/librte_pcapng.so.25.0.symbols 00:03:25.218 [339/719] Generating symbol file lib/librte_eventdev.so.25.0.p/librte_eventdev.so.25.0.symbols 00:03:25.218 [340/719] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:25.218 [341/719] Generating symbol file lib/librte_lpm.so.25.0.p/librte_lpm.so.25.0.symbols 00:03:25.218 [342/719] Linking target lib/librte_dispatcher.so.25.0 00:03:25.218 [343/719] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:25.218 [344/719] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:25.218 [345/719] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:25.218 [346/719] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:25.218 [347/719] Linking static target lib/librte_member.a 00:03:25.475 [348/719] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:25.475 [349/719] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 
00:03:25.475 [350/719] Linking static target lib/librte_power.a 00:03:25.475 [351/719] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:03:25.475 [352/719] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:03:25.475 [353/719] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:25.475 [354/719] Linking static target lib/librte_regexdev.a 00:03:25.475 [355/719] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:25.475 [356/719] Linking static target lib/librte_rawdev.a 00:03:25.475 [357/719] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:03:25.475 [358/719] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.475 [359/719] Linking target lib/librte_member.so.25.0 00:03:25.733 [360/719] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:03:25.733 [361/719] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:03:25.733 [362/719] Linking static target lib/librte_mldev.a 00:03:25.733 [363/719] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:25.733 [364/719] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:25.733 [365/719] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.733 [366/719] Linking target lib/librte_power.so.25.0 00:03:25.733 [367/719] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:25.991 [368/719] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:25.991 [369/719] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.991 [370/719] Linking static target lib/librte_reorder.a 00:03:25.991 [371/719] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:25.991 [372/719] Linking target lib/librte_rawdev.so.25.0 00:03:25.991 [373/719] Linking static target lib/librte_rib.a 00:03:25.991 [374/719] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.991 [375/719] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:25.991 [376/719] Linking target lib/librte_regexdev.so.25.0 00:03:25.991 [377/719] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:25.991 [378/719] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:25.991 [379/719] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.248 [380/719] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:26.248 [381/719] Linking static target lib/librte_stack.a 00:03:26.248 [382/719] Linking target lib/librte_reorder.so.25.0 00:03:26.248 [383/719] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:26.248 [384/719] Linking static target lib/librte_security.a 00:03:26.248 [385/719] Generating symbol file lib/librte_reorder.so.25.0.p/librte_reorder.so.25.0.symbols 00:03:26.248 [386/719] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.248 [387/719] Linking target lib/librte_rib.so.25.0 00:03:26.248 [388/719] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.248 [389/719] Linking target lib/librte_stack.so.25.0 00:03:26.248 [390/719] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:26.248 [391/719] Generating symbol file lib/librte_rib.so.25.0.p/librte_rib.so.25.0.symbols 00:03:26.506 [392/719] Compiling C object 
lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:26.506 [393/719] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.506 [394/719] Linking target lib/librte_security.so.25.0 00:03:26.506 [395/719] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:26.506 [396/719] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:26.764 [397/719] Generating symbol file lib/librte_security.so.25.0.p/librte_security.so.25.0.symbols 00:03:26.764 [398/719] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:26.764 [399/719] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:26.764 [400/719] Linking static target lib/librte_sched.a 00:03:26.764 [401/719] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.764 [402/719] Linking target lib/librte_mldev.so.25.0 00:03:27.021 [403/719] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:27.021 [404/719] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.021 [405/719] Linking target lib/librte_sched.so.25.0 00:03:27.021 [406/719] Generating symbol file lib/librte_sched.so.25.0.p/librte_sched.so.25.0.symbols 00:03:27.021 [407/719] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:27.279 [408/719] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:27.279 [409/719] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:27.537 [410/719] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:27.537 [411/719] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:03:27.537 [412/719] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:27.537 [413/719] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:27.537 [414/719] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:27.795 [415/719] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:27.795 [416/719] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:27.795 [417/719] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:27.795 [418/719] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:27.795 [419/719] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:28.053 [420/719] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:03:28.053 [421/719] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:28.311 [422/719] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:28.311 [423/719] Linking static target lib/librte_ipsec.a 00:03:28.311 [424/719] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:28.311 [425/719] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:28.311 [426/719] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.569 [427/719] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:28.569 [428/719] Linking target lib/librte_ipsec.so.25.0 00:03:28.569 [429/719] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:28.569 [430/719] Generating symbol file lib/librte_ipsec.so.25.0.p/librte_ipsec.so.25.0.symbols 00:03:28.827 [431/719] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:28.827 [432/719] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:28.827 [433/719] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:28.827 [434/719] Linking static target lib/librte_pdcp.a 00:03:28.827 [435/719] 
Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:28.827 [436/719] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:28.827 [437/719] Linking static target lib/librte_fib.a 00:03:28.827 [438/719] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:29.086 [439/719] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.086 [440/719] Linking target lib/librte_pdcp.so.25.0 00:03:29.086 [441/719] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.086 [442/719] Linking target lib/librte_fib.so.25.0 00:03:29.344 [443/719] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:29.344 [444/719] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:29.344 [445/719] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:29.344 [446/719] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:29.344 [447/719] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:29.344 [448/719] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:29.602 [449/719] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:29.602 [450/719] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:29.860 [451/719] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:29.860 [452/719] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:29.860 [453/719] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:29.860 [454/719] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:29.860 [455/719] Linking static target lib/librte_port.a 00:03:29.860 [456/719] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:29.861 [457/719] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:29.861 [458/719] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:29.861 [459/719] Linking static target lib/librte_pdump.a 00:03:30.119 [460/719] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:30.119 [461/719] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:30.119 [462/719] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.119 [463/719] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.119 [464/719] Linking target lib/librte_pdump.so.25.0 00:03:30.119 [465/719] Linking target lib/librte_port.so.25.0 00:03:30.377 [466/719] Generating symbol file lib/librte_port.so.25.0.p/librte_port.so.25.0.symbols 00:03:30.377 [467/719] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:30.377 [468/719] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:30.377 [469/719] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:03:30.377 [470/719] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:30.377 [471/719] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:30.635 [472/719] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:30.635 [473/719] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:30.635 [474/719] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:30.635 [475/719] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 
00:03:30.893 [476/719] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:30.893 [477/719] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:30.893 [478/719] Linking static target lib/librte_table.a 00:03:30.893 [479/719] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:31.150 [480/719] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:31.150 [481/719] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.151 [482/719] Linking target lib/librte_table.so.25.0 00:03:31.151 [483/719] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:31.151 [484/719] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:31.408 [485/719] Generating symbol file lib/librte_table.so.25.0.p/librte_table.so.25.0.symbols 00:03:31.408 [486/719] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:31.408 [487/719] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:31.666 [488/719] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:31.666 [489/719] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:31.667 [490/719] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:31.667 [491/719] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:31.667 [492/719] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:31.924 [493/719] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:31.925 [494/719] Linking static target lib/librte_graph.a 00:03:31.925 [495/719] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:31.925 [496/719] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:31.925 [497/719] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:32.182 [498/719] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:32.182 [499/719] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:32.182 [500/719] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:32.440 [501/719] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.440 [502/719] Linking target lib/librte_graph.so.25.0 00:03:32.440 [503/719] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:32.440 [504/719] Generating symbol file lib/librte_graph.so.25.0.p/librte_graph.so.25.0.symbols 00:03:32.440 [505/719] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:32.698 [506/719] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:32.698 [507/719] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:32.698 [508/719] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:32.698 [509/719] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:32.698 [510/719] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:32.956 [511/719] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:32.956 [512/719] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:32.956 [513/719] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:32.956 [514/719] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:32.956 [515/719] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:32.956 [516/719] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:33.214 [517/719] Compiling C 
object lib/librte_node.a.p/node_udp4_input.c.o 00:03:33.214 [518/719] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:33.214 [519/719] Linking static target lib/librte_node.a 00:03:33.214 [520/719] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:33.214 [521/719] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:33.214 [522/719] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.471 [523/719] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:33.471 [524/719] Linking target lib/librte_node.so.25.0 00:03:33.471 [525/719] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:33.471 [526/719] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:33.471 [527/719] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:33.471 [528/719] Linking static target drivers/librte_bus_vdev.a 00:03:33.729 [529/719] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:33.729 [530/719] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:33.729 [531/719] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:33.729 [532/719] Compiling C object drivers/librte_bus_vdev.so.25.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:33.729 [533/719] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:33.729 [534/719] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:33.729 [535/719] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.729 [536/719] Linking target drivers/librte_bus_vdev.so.25.0 00:03:33.729 [537/719] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:33.729 [538/719] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:33.730 [539/719] Linking static target drivers/librte_bus_pci.a 00:03:33.730 [540/719] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:33.730 [541/719] Compiling C object drivers/librte_bus_pci.so.25.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:33.730 [542/719] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:33.730 [543/719] Generating symbol file drivers/librte_bus_vdev.so.25.0.p/librte_bus_vdev.so.25.0.symbols 00:03:33.987 [544/719] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:33.987 [545/719] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:33.987 [546/719] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:33.987 [547/719] Linking static target drivers/librte_mempool_ring.a 00:03:33.987 [548/719] Compiling C object drivers/librte_mempool_ring.so.25.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:33.987 [549/719] Linking target drivers/librte_mempool_ring.so.25.0 00:03:34.245 [550/719] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.245 [551/719] Linking target drivers/librte_bus_pci.so.25.0 00:03:34.245 [552/719] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:34.245 [553/719] Generating symbol file drivers/librte_bus_pci.so.25.0.p/librte_bus_pci.so.25.0.symbols 00:03:34.245 [554/719] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:34.502 [555/719] Compiling C 
object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:34.502 [556/719] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:35.067 [557/719] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:35.067 [558/719] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:35.067 [559/719] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:35.324 [560/719] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:35.324 [561/719] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:35.324 [562/719] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:35.324 [563/719] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:35.324 [564/719] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:35.581 [565/719] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:35.581 [566/719] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:35.581 [567/719] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:35.581 [568/719] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:35.581 [569/719] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:03:35.838 [570/719] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:36.096 [571/719] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:36.353 [572/719] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:36.353 [573/719] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:36.353 [574/719] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:36.353 [575/719] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:36.353 [576/719] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:36.610 [577/719] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:36.610 [578/719] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:36.610 [579/719] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:36.610 [580/719] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:36.610 [581/719] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:36.868 [582/719] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:36.868 [583/719] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:03:36.868 [584/719] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:36.868 [585/719] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:36.868 [586/719] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:36.868 [587/719] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:37.127 [588/719] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:37.127 [589/719] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:37.127 [590/719] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:37.127 [591/719] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:37.127 [592/719] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:37.385 [593/719] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:37.385 [594/719] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:37.385 [595/719] 
Linking static target drivers/libtmp_rte_net_i40e.a 00:03:37.644 [596/719] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:37.644 [597/719] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:37.644 [598/719] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:37.644 [599/719] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:37.644 [600/719] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:37.644 [601/719] Compiling C object drivers/librte_net_i40e.so.25.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:37.644 [602/719] Linking static target drivers/librte_net_i40e.a 00:03:37.644 [603/719] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:37.902 [604/719] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:37.902 [605/719] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:37.902 [606/719] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:38.160 [607/719] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:38.160 [608/719] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:38.160 [609/719] Linking static target lib/librte_vhost.a 00:03:38.160 [610/719] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:38.418 [611/719] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:38.418 [612/719] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.418 [613/719] Linking target drivers/librte_net_i40e.so.25.0 00:03:38.418 [614/719] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:38.418 [615/719] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:38.418 [616/719] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:38.676 [617/719] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:38.676 [618/719] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:38.676 [619/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:38.934 [620/719] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:38.934 [621/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:38.934 [622/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:38.934 [623/719] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:38.934 [624/719] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.934 [625/719] Linking target lib/librte_vhost.so.25.0 00:03:38.934 [626/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:39.192 [627/719] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:39.192 [628/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:39.449 [629/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:39.449 [630/719] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:39.449 [631/719] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:40.014 [632/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:40.014 [633/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:40.014 [634/719] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:40.014 [635/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:40.014 [636/719] Linking static target lib/librte_pipeline.a 00:03:40.014 [637/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:40.014 [638/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:40.014 [639/719] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:40.014 [640/719] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:40.272 [641/719] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:40.272 [642/719] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:40.272 [643/719] Linking target app/dpdk-dumpcap 00:03:40.272 [644/719] Linking target app/dpdk-graph 00:03:40.272 [645/719] Linking target app/dpdk-pdump 00:03:40.529 [646/719] Linking target app/dpdk-test-acl 00:03:40.529 [647/719] Linking target app/dpdk-proc-info 00:03:40.529 [648/719] Linking target app/dpdk-test-cmdline 00:03:40.529 [649/719] Linking target app/dpdk-test-compress-perf 00:03:40.529 [650/719] Linking target app/dpdk-test-crypto-perf 00:03:40.529 [651/719] Linking target app/dpdk-test-dma-perf 00:03:40.787 [652/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:40.787 [653/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:40.787 [654/719] Linking target app/dpdk-test-fib 00:03:40.787 [655/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:40.787 [656/719] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:41.045 [657/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:41.045 [658/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:41.045 [659/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:41.045 [660/719] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:41.045 [661/719] Linking target app/dpdk-test-gpudev 00:03:41.045 [662/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:41.045 [663/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:41.303 [664/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:41.303 [665/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:41.303 [666/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:41.303 [667/719] Linking target app/dpdk-test-flow-perf 00:03:41.303 [668/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:41.303 [669/719] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:41.560 [670/719] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:41.560 [671/719] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:41.560 [672/719] Compiling C object 
app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:41.560 [673/719] Linking target app/dpdk-test-eventdev 00:03:41.560 [674/719] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:41.818 [675/719] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:41.818 [676/719] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:41.818 [677/719] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:41.818 [678/719] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:42.075 [679/719] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.075 [680/719] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:42.075 [681/719] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:42.075 [682/719] Linking target app/dpdk-test-bbdev 00:03:42.075 [683/719] Linking target lib/librte_pipeline.so.25.0 00:03:42.075 [684/719] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:42.333 [685/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:42.333 [686/719] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:42.333 [687/719] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:42.333 [688/719] Linking target app/dpdk-test-pipeline 00:03:42.333 [689/719] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:42.591 [690/719] Linking target app/dpdk-test-mldev 00:03:42.591 [691/719] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:42.591 [692/719] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:42.849 [693/719] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:42.849 [694/719] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:42.849 [695/719] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:42.849 [696/719] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:43.107 [697/719] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:43.107 [698/719] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:43.107 [699/719] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:43.107 [700/719] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:43.107 [701/719] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:43.107 [702/719] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:43.365 [703/719] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:43.623 [704/719] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:43.623 [705/719] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:43.623 [706/719] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:43.623 [707/719] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:43.881 [708/719] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:43.881 [709/719] Linking target app/dpdk-test-sad 00:03:43.881 [710/719] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:44.140 [711/719] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:44.140 [712/719] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:44.140 [713/719] Linking target app/dpdk-test-regex 00:03:44.140 [714/719] Compiling C object 
app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:03:44.398 [715/719] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:44.398 [716/719] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:44.656 [717/719] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:44.656 [718/719] Linking target app/dpdk-test-security-perf 00:03:44.914 [719/719] Linking target app/dpdk-testpmd 00:03:44.914 17:38:04 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:44.914 17:38:04 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:44.914 17:38:04 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:44.914 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:44.914 [0/1] Installing files. 00:03:45.179 Installing subdir /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/counters.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/cpu.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/memory.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:45.179 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:45.179 Installing 
/home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:45.179 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_eddsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:45.180 Installing 
/home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:45.180 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:45.180 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:45.180 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:45.181 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:45.181 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.181 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 
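[Editor's sketch] Each example is installed with its sources and Makefile intact under share/dpdk/examples, so the copies can be rebuilt later against the installed libraries through pkg-config. A hypothetical smoke test of that layout, assuming the install prefix shown in this log:

# Post-install sanity check (prefix from the log; the pkgconfig subdir is an
# assumption, since meson may use lib/ or lib64/ as libdir on this image).
export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
pkg-config --modversion libdpdk   # confirms the installed tree is discoverable
# The installed example Makefiles resolve cflags/libs via pkg-config, e.g.:
make -C /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
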
00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.182 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.182 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.183 
Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:45.183 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:45.464 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:45.464 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:45.464 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:45.464 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:45.464 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:45.464 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:45.464 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:45.464 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:45.464 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:45.465 
Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:45.465 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:45.465 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_argparse.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing 
lib/librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.465 Installing lib/librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_rawdev.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.466 Installing lib/librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.727 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.727 Installing lib/librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.727 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.727 Installing lib/librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.727 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.727 Installing lib/librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.727 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.727 Installing lib/librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.727 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.727 Installing lib/librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.727 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.727 Installing lib/librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.727 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.727 Installing lib/librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.727 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.727 Installing lib/librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.727 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.727 Installing lib/librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.727 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.727 Installing drivers/librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:45.727 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.727 Installing drivers/librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:45.727 
Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.727 Installing drivers/librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:45.727 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.727 Installing drivers/librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:45.727 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.727 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.727 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.727 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.727 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.727 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.727 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.727 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.727 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.727 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.727 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.727 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.727 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.727 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.728 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.728 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.728 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.728 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.728 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.728 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/argparse/rte_argparse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.728 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitset.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.728 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/ptr_compress/rte_ptr_compress.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_cksum.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip4.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing 
/home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.729 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing 
/home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.730 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing 
/home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.731 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.732 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.732 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.732 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.732 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.732 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.732 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.732 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.732 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.732 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.732 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.732 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.732 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.732 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.732 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.732 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.732 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.732 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.732 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.732 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.732 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.732 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.732 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.732 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.732 Installing 
/home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.732 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.732 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.732 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.732 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.732 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.732 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry-exporter.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.732 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.732 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:45.732 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:45.732 Installing symlink pointing to librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.25 00:03:45.732 Installing symlink pointing to librte_log.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:45.732 Installing symlink pointing to librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.25 00:03:45.732 Installing symlink pointing to librte_kvargs.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:45.732 Installing symlink pointing to librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so.25 00:03:45.732 Installing symlink pointing to librte_argparse.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so 00:03:45.732 Installing symlink pointing to librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.25 00:03:45.732 Installing symlink pointing to librte_telemetry.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:45.732 Installing symlink pointing to librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.25 00:03:45.732 Installing symlink pointing to librte_eal.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:45.732 Installing symlink pointing to librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.25 00:03:45.732 Installing symlink pointing to librte_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:45.732 Installing symlink pointing to librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.25 00:03:45.732 Installing symlink pointing to librte_rcu.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:45.732 Installing symlink pointing to librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.25 00:03:45.732 Installing symlink pointing to librte_mempool.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:45.732 Installing symlink pointing to librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.25 00:03:45.732 Installing symlink pointing to librte_mbuf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:45.732 Installing symlink pointing to librte_net.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.25 00:03:45.732 Installing symlink pointing to librte_net.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:45.732 Installing symlink pointing to librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.25 00:03:45.732 Installing symlink pointing to librte_meter.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:45.732 Installing symlink pointing to librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.25 00:03:45.732 Installing symlink pointing to librte_ethdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:45.732 Installing symlink pointing to librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.25 00:03:45.732 Installing symlink pointing to librte_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:45.732 Installing symlink pointing to librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.25 00:03:45.732 Installing symlink pointing to librte_cmdline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:45.732 Installing symlink pointing to librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.25 00:03:45.732 Installing symlink pointing to librte_metrics.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:45.732 Installing symlink pointing to librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.25 00:03:45.732 Installing symlink pointing to librte_hash.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:45.732 Installing symlink pointing to librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.25 00:03:45.732 Installing symlink pointing to librte_timer.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:45.732 Installing symlink pointing to librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.25 00:03:45.732 Installing symlink pointing to librte_acl.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:45.732 Installing symlink pointing to librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.25 00:03:45.732 Installing symlink pointing to librte_bbdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:45.733 Installing symlink pointing to librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.25 00:03:45.733 Installing symlink pointing to librte_bitratestats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:45.733 Installing symlink pointing to librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.25 00:03:45.733 Installing symlink pointing to librte_bpf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:45.733 Installing symlink pointing to librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.25 00:03:45.733 Installing symlink pointing to librte_cfgfile.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:45.733 Installing symlink pointing to librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.25 00:03:45.733 Installing symlink pointing to librte_compressdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:45.733 Installing symlink pointing to librte_cryptodev.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.25 00:03:45.733 Installing symlink pointing to librte_cryptodev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:45.733 Installing symlink pointing to librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.25 00:03:45.733 Installing symlink pointing to librte_distributor.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:45.733 Installing symlink pointing to librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.25 00:03:45.733 Installing symlink pointing to librte_dmadev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:45.733 Installing symlink pointing to librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.25 00:03:45.733 Installing symlink pointing to librte_efd.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:45.733 Installing symlink pointing to librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.25 00:03:45.733 Installing symlink pointing to librte_eventdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:45.733 Installing symlink pointing to librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.25 00:03:45.733 Installing symlink pointing to librte_dispatcher.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:45.733 Installing symlink pointing to librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.25 00:03:45.733 Installing symlink pointing to librte_gpudev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:45.733 Installing symlink pointing to librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.25 00:03:45.733 Installing symlink pointing to librte_gro.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:45.733 Installing symlink pointing to librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.25 00:03:45.733 Installing symlink pointing to librte_gso.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:45.733 Installing symlink pointing to librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.25 00:03:45.733 Installing symlink pointing to librte_ip_frag.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:45.733 Installing symlink pointing to librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.25 00:03:45.733 Installing symlink pointing to librte_jobstats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:45.733 Installing symlink pointing to librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.25 00:03:45.733 Installing symlink pointing to librte_latencystats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:45.733 Installing symlink pointing to librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.25 00:03:45.733 Installing symlink pointing to librte_lpm.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:45.733 './librte_bus_pci.so' -> 'dpdk/pmds-25.0/librte_bus_pci.so' 00:03:45.733 './librte_bus_pci.so.25' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25' 00:03:45.733 './librte_bus_pci.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25.0' 00:03:45.733 './librte_bus_vdev.so' -> 'dpdk/pmds-25.0/librte_bus_vdev.so' 
00:03:45.733 './librte_bus_vdev.so.25' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25' 00:03:45.733 './librte_bus_vdev.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25.0' 00:03:45.733 './librte_mempool_ring.so' -> 'dpdk/pmds-25.0/librte_mempool_ring.so' 00:03:45.733 './librte_mempool_ring.so.25' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25' 00:03:45.733 './librte_mempool_ring.so.25.0' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25.0' 00:03:45.733 './librte_net_i40e.so' -> 'dpdk/pmds-25.0/librte_net_i40e.so' 00:03:45.733 './librte_net_i40e.so.25' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25' 00:03:45.733 './librte_net_i40e.so.25.0' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25.0' 00:03:45.733 Installing symlink pointing to librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.25 00:03:45.733 Installing symlink pointing to librte_member.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:45.733 Installing symlink pointing to librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.25 00:03:45.733 Installing symlink pointing to librte_pcapng.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:45.733 Installing symlink pointing to librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.25 00:03:45.733 Installing symlink pointing to librte_power.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:45.733 Installing symlink pointing to librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.25 00:03:45.733 Installing symlink pointing to librte_rawdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:45.733 Installing symlink pointing to librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.25 00:03:45.733 Installing symlink pointing to librte_regexdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:45.733 Installing symlink pointing to librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.25 00:03:45.733 Installing symlink pointing to librte_mldev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:45.733 Installing symlink pointing to librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.25 00:03:45.733 Installing symlink pointing to librte_rib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:45.733 Installing symlink pointing to librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.25 00:03:45.733 Installing symlink pointing to librte_reorder.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:45.733 Installing symlink pointing to librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.25 00:03:45.733 Installing symlink pointing to librte_sched.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:45.733 Installing symlink pointing to librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.25 00:03:45.733 Installing symlink pointing to librte_security.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:45.733 Installing symlink pointing to librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.25 00:03:45.733 Installing symlink pointing to librte_stack.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:45.733 Installing symlink pointing to librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.25 
00:03:45.733 Installing symlink pointing to librte_vhost.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:45.733 Installing symlink pointing to librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.25 00:03:45.733 Installing symlink pointing to librte_ipsec.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:45.734 Installing symlink pointing to librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.25 00:03:45.734 Installing symlink pointing to librte_pdcp.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:45.734 Installing symlink pointing to librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.25 00:03:45.734 Installing symlink pointing to librte_fib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:45.734 Installing symlink pointing to librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.25 00:03:45.734 Installing symlink pointing to librte_port.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:45.734 Installing symlink pointing to librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.25 00:03:45.734 Installing symlink pointing to librte_pdump.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:45.734 Installing symlink pointing to librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.25 00:03:45.734 Installing symlink pointing to librte_table.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:45.734 Installing symlink pointing to librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.25 00:03:45.734 Installing symlink pointing to librte_pipeline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:45.734 Installing symlink pointing to librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.25 00:03:45.734 Installing symlink pointing to librte_graph.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:45.734 Installing symlink pointing to librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.25 00:03:45.734 Installing symlink pointing to librte_node.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:45.734 Installing symlink pointing to librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25 00:03:45.734 Installing symlink pointing to librte_bus_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:03:45.734 Installing symlink pointing to librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25 00:03:45.734 Installing symlink pointing to librte_bus_vdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:03:45.734 Installing symlink pointing to librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25 00:03:45.734 Installing symlink pointing to librte_mempool_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:03:45.734 Installing symlink pointing to librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25 00:03:45.734 Installing symlink pointing to librte_net_i40e.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:03:45.734 Running custom install script 
'/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-25.0' 00:03:45.734 17:38:05 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:45.734 17:38:05 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:45.734 00:03:45.734 real 0m38.836s 00:03:45.734 user 4m29.461s 00:03:45.734 sys 0m42.665s 00:03:45.734 ************************************ 00:03:45.734 END TEST build_native_dpdk 00:03:45.734 ************************************ 00:03:45.734 17:38:05 build_native_dpdk -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:03:45.734 17:38:05 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:45.734 17:38:05 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:45.734 17:38:05 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:45.734 17:38:05 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:45.734 17:38:05 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:45.734 17:38:05 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:45.734 17:38:05 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:45.734 17:38:05 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:45.734 17:38:05 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:45.993 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:45.993 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.993 DPDK includes: /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.993 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:46.251 Using 'verbs' RDMA provider 00:03:59.397 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:09.407 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:09.407 Creating mk/config.mk...done. 00:04:09.407 Creating mk/cc.flags.mk...done. 00:04:09.407 Type 'make' to build. 00:04:09.407 17:38:29 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:09.407 17:38:29 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:04:09.407 17:38:29 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:04:09.407 17:38:29 -- common/autotest_common.sh@10 -- $ set +x 00:04:09.407 ************************************ 00:04:09.407 START TEST make 00:04:09.407 ************************************ 00:04:09.407 17:38:29 make -- common/autotest_common.sh@1127 -- $ make -j10 00:04:09.407 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:04:09.407 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:04:09.407 meson setup builddir \ 00:04:09.407 -Dwith-libaio=enabled \ 00:04:09.407 -Dwith-liburing=enabled \ 00:04:09.407 -Dwith-libvfn=disabled \ 00:04:09.407 -Dwith-spdk=disabled \ 00:04:09.407 -Dexamples=false \ 00:04:09.407 -Dtests=false \ 00:04:09.407 -Dtools=false && \ 00:04:09.407 meson compile -C builddir && \ 00:04:09.407 cd -) 00:04:09.407 make[1]: Nothing to be done for 'all'.
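For context on the configure step above: the DPDK install recorded earlier in this log placed libdpdk.pc and libdpdk-libs.pc under /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig, and the "DPDK libraries"/"DPDK includes" paths that configure reports are resolved through those files. A minimal shell sketch of the same lookup, illustrative only (not part of the recorded run; it assumes the install prefix shown in this log):

  # Point pkg-config at the pkgconfig directory installed above
  # (assumption: the build tree is still at the path shown in this log).
  export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
  pkg-config --exists libdpdk && echo "libdpdk visible"
  pkg-config --cflags libdpdk   # include flags pointing at build/include
  pkg-config --libs libdpdk     # -L/-l flags for the librte_* libraries installed above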
00:04:11.306 The Meson build system 00:04:11.306 Version: 1.5.0 00:04:11.306 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:04:11.306 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:11.306 Build type: native build 00:04:11.306 Project name: xnvme 00:04:11.307 Project version: 0.7.5 00:04:11.307 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:11.307 C linker for the host machine: gcc ld.bfd 2.40-14 00:04:11.307 Host machine cpu family: x86_64 00:04:11.307 Host machine cpu: x86_64 00:04:11.307 Message: host_machine.system: linux 00:04:11.307 Compiler for C supports arguments -Wno-missing-braces: YES 00:04:11.307 Compiler for C supports arguments -Wno-cast-function-type: YES 00:04:11.307 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:04:11.307 Run-time dependency threads found: YES 00:04:11.307 Has header "setupapi.h" : NO 00:04:11.307 Has header "linux/blkzoned.h" : YES 00:04:11.307 Has header "linux/blkzoned.h" : YES (cached) 00:04:11.307 Has header "libaio.h" : YES 00:04:11.307 Library aio found: YES 00:04:11.307 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:11.307 Run-time dependency liburing found: YES 2.2 00:04:11.307 Dependency libvfn skipped: feature with-libvfn disabled 00:04:11.307 Found CMake: /usr/bin/cmake (3.27.7) 00:04:11.307 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:04:11.307 Subproject spdk : skipped: feature with-spdk disabled 00:04:11.307 Run-time dependency appleframeworks found: NO (tried framework) 00:04:11.307 Run-time dependency appleframeworks found: NO (tried framework) 00:04:11.307 Library rt found: YES 00:04:11.307 Checking for function "clock_gettime" with dependency -lrt: YES 00:04:11.307 Configuring xnvme_config.h using configuration 00:04:11.307 Configuring xnvme.spec using configuration 00:04:11.307 Run-time dependency bash-completion found: YES 2.11 00:04:11.307 Message: Bash-completions: /usr/share/bash-completion/completions 00:04:11.307 Program cp found: YES (/usr/bin/cp) 00:04:11.307 Build targets in project: 3 00:04:11.307 00:04:11.307 xnvme 0.7.5 00:04:11.307 00:04:11.307 Subprojects 00:04:11.307 spdk : NO Feature 'with-spdk' disabled 00:04:11.307 00:04:11.307 User defined options 00:04:11.307 examples : false 00:04:11.307 tests : false 00:04:11.307 tools : false 00:04:11.307 with-libaio : enabled 00:04:11.307 with-liburing: enabled 00:04:11.307 with-libvfn : disabled 00:04:11.307 with-spdk : disabled 00:04:11.307 00:04:11.307 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:11.564 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:04:11.564 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:04:11.564 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:04:11.564 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:04:11.564 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:04:11.564 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:04:11.564 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:04:11.564 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:04:11.564 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:04:11.564 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:04:11.564 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:04:11.564 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:04:11.564 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:04:11.822 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:04:11.822 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:04:11.822 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:04:11.822 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:04:11.822 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:04:11.822 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:04:11.822 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:04:11.822 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:04:11.822 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:04:11.822 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:04:11.822 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:04:11.822 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:04:11.822 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:04:11.822 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:04:11.822 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:04:11.822 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:04:11.822 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:04:11.822 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:04:11.822 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:04:11.822 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:04:11.822 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:04:11.822 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:04:11.822 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:04:11.822 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:04:11.822 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:04:11.822 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:04:11.822 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:04:11.822 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:04:11.822 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:04:11.822 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:04:11.822 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:04:11.822 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:04:12.080 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:04:12.080 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:04:12.080 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:04:12.080 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:04:12.080 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:04:12.080 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:04:12.080 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:04:12.080 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:04:12.080 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:04:12.080 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:04:12.080 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:04:12.080 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:04:12.080 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:04:12.080 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:04:12.080 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:04:12.080 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:04:12.080 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:04:12.080 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:04:12.080 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:04:12.080 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:04:12.080 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:04:12.080 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:04:12.080 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:04:12.080 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:04:12.080 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:04:12.338 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:04:12.338 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:04:12.338 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:04:12.338 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:04:12.597 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:04:12.597 [75/76] Linking static target lib/libxnvme.a 00:04:12.597 [76/76] Linking target lib/libxnvme.so.0.7.5 00:04:12.597 INFO: autodetecting backend as ninja 00:04:12.597 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:12.597 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:44.666 CC lib/ut/ut.o 00:04:44.666 CC lib/ut_mock/mock.o 00:04:44.666 CC lib/log/log.o 00:04:44.666 CC lib/log/log_flags.o 00:04:44.666 CC lib/log/log_deprecated.o 00:04:44.666 LIB libspdk_ut.a 00:04:44.666 LIB libspdk_ut_mock.a 00:04:44.666 SO libspdk_ut_mock.so.6.0 00:04:44.666 SO libspdk_ut.so.2.0 00:04:44.666 LIB libspdk_log.a 00:04:44.666 SO libspdk_log.so.7.1 00:04:44.666 SYMLINK libspdk_ut_mock.so 00:04:44.666 SYMLINK libspdk_ut.so 00:04:44.666 SYMLINK libspdk_log.so 00:04:44.666 CC lib/util/base64.o 00:04:44.666 CC lib/util/cpuset.o 00:04:44.666 CC lib/ioat/ioat.o 00:04:44.666 CC lib/util/crc16.o 00:04:44.666 CXX lib/trace_parser/trace.o 00:04:44.666 CC lib/util/crc32.o 00:04:44.666 CC lib/util/bit_array.o 00:04:44.666 CC lib/util/crc32c.o 00:04:44.666 CC lib/dma/dma.o 00:04:44.666 CC lib/vfio_user/host/vfio_user_pci.o 00:04:44.666 CC lib/util/crc32_ieee.o 00:04:44.666 CC lib/util/crc64.o 00:04:44.666 CC lib/util/dif.o 00:04:44.666 CC lib/util/fd.o 00:04:44.666 CC lib/util/fd_group.o 00:04:44.666 LIB libspdk_dma.a 00:04:44.666 SO libspdk_dma.so.5.0 00:04:44.666 CC lib/util/file.o 00:04:44.666 CC lib/vfio_user/host/vfio_user.o 00:04:44.666 CC lib/util/hexlify.o 00:04:44.666 SYMLINK libspdk_dma.so 00:04:44.666 CC lib/util/iov.o 00:04:44.666 CC lib/util/math.o 
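At this point the xnvme subproject has been linked ("[76/76] Linking target lib/libxnvme.so.0.7.5" above) and SPDK's own quiet-make output has begun; the CC/CXX, LIB, SO, and SYMLINK prefixes that follow mark compiles, static archives, versioned shared objects, and their unversioned symlinks. A quick sanity check of the freshly linked library, illustrative only (not executed by this job; the builddir-relative path is an assumption based on the ninja target name above):

  cd /home/vagrant/spdk_repo/spdk/xnvme
  # Confirm the enabled backends were linked in (liburing, libaio)
  ldd builddir/lib/libxnvme.so.0.7.5
  # List a few of the exported xnvme_* symbols
  nm -D --defined-only builddir/lib/libxnvme.so.0.7.5 | head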
00:04:44.666 LIB libspdk_ioat.a 00:04:44.666 SO libspdk_ioat.so.7.0 00:04:44.666 CC lib/util/net.o 00:04:44.666 CC lib/util/pipe.o 00:04:44.666 SYMLINK libspdk_ioat.so 00:04:44.666 CC lib/util/strerror_tls.o 00:04:44.666 CC lib/util/string.o 00:04:44.666 CC lib/util/uuid.o 00:04:44.666 LIB libspdk_vfio_user.a 00:04:44.666 CC lib/util/xor.o 00:04:44.666 SO libspdk_vfio_user.so.5.0 00:04:44.666 CC lib/util/zipf.o 00:04:44.666 CC lib/util/md5.o 00:04:44.666 SYMLINK libspdk_vfio_user.so 00:04:44.924 LIB libspdk_util.a 00:04:44.924 LIB libspdk_trace_parser.a 00:04:45.182 SO libspdk_util.so.10.1 00:04:45.182 SO libspdk_trace_parser.so.6.0 00:04:45.182 SYMLINK libspdk_trace_parser.so 00:04:45.182 SYMLINK libspdk_util.so 00:04:45.440 CC lib/rdma_provider/common.o 00:04:45.440 CC lib/vmd/vmd.o 00:04:45.440 CC lib/env_dpdk/env.o 00:04:45.440 CC lib/json/json_util.o 00:04:45.440 CC lib/rdma_utils/rdma_utils.o 00:04:45.440 CC lib/json/json_parse.o 00:04:45.440 CC lib/vmd/led.o 00:04:45.440 CC lib/idxd/idxd.o 00:04:45.440 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:45.440 CC lib/conf/conf.o 00:04:45.440 CC lib/idxd/idxd_user.o 00:04:45.440 CC lib/idxd/idxd_kernel.o 00:04:45.440 LIB libspdk_rdma_provider.a 00:04:45.440 LIB libspdk_rdma_utils.a 00:04:45.440 SO libspdk_rdma_provider.so.6.0 00:04:45.440 SO libspdk_rdma_utils.so.1.0 00:04:45.440 LIB libspdk_conf.a 00:04:45.440 CC lib/json/json_write.o 00:04:45.697 CC lib/env_dpdk/memory.o 00:04:45.697 SO libspdk_conf.so.6.0 00:04:45.697 SYMLINK libspdk_rdma_utils.so 00:04:45.697 CC lib/env_dpdk/pci.o 00:04:45.697 SYMLINK libspdk_rdma_provider.so 00:04:45.697 CC lib/env_dpdk/init.o 00:04:45.697 SYMLINK libspdk_conf.so 00:04:45.697 CC lib/env_dpdk/threads.o 00:04:45.697 CC lib/env_dpdk/pci_ioat.o 00:04:45.697 CC lib/env_dpdk/pci_virtio.o 00:04:45.697 CC lib/env_dpdk/pci_vmd.o 00:04:45.697 CC lib/env_dpdk/pci_idxd.o 00:04:45.697 CC lib/env_dpdk/pci_event.o 00:04:45.956 LIB libspdk_json.a 00:04:45.956 CC lib/env_dpdk/sigbus_handler.o 00:04:45.956 SO libspdk_json.so.6.0 00:04:45.956 CC lib/env_dpdk/pci_dpdk.o 00:04:45.956 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:45.956 SYMLINK libspdk_json.so 00:04:45.956 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:45.956 LIB libspdk_idxd.a 00:04:45.956 SO libspdk_idxd.so.12.1 00:04:45.956 LIB libspdk_vmd.a 00:04:45.956 SYMLINK libspdk_idxd.so 00:04:45.956 SO libspdk_vmd.so.6.0 00:04:45.956 CC lib/jsonrpc/jsonrpc_server.o 00:04:46.214 CC lib/jsonrpc/jsonrpc_client.o 00:04:46.214 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:46.214 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:46.214 SYMLINK libspdk_vmd.so 00:04:46.214 LIB libspdk_jsonrpc.a 00:04:46.475 SO libspdk_jsonrpc.so.6.0 00:04:46.475 SYMLINK libspdk_jsonrpc.so 00:04:46.734 CC lib/rpc/rpc.o 00:04:46.734 LIB libspdk_env_dpdk.a 00:04:46.734 SO libspdk_env_dpdk.so.15.1 00:04:46.992 LIB libspdk_rpc.a 00:04:46.992 SO libspdk_rpc.so.6.0 00:04:46.992 SYMLINK libspdk_env_dpdk.so 00:04:46.992 SYMLINK libspdk_rpc.so 00:04:47.250 CC lib/keyring/keyring.o 00:04:47.250 CC lib/keyring/keyring_rpc.o 00:04:47.250 CC lib/trace/trace_flags.o 00:04:47.250 CC lib/notify/notify.o 00:04:47.250 CC lib/notify/notify_rpc.o 00:04:47.250 CC lib/trace/trace_rpc.o 00:04:47.250 CC lib/trace/trace.o 00:04:47.250 LIB libspdk_notify.a 00:04:47.250 SO libspdk_notify.so.6.0 00:04:47.250 SYMLINK libspdk_notify.so 00:04:47.508 LIB libspdk_trace.a 00:04:47.508 SO libspdk_trace.so.11.0 00:04:47.508 LIB libspdk_keyring.a 00:04:47.508 SO libspdk_keyring.so.2.0 00:04:47.508 SYMLINK libspdk_trace.so 00:04:47.508 SYMLINK 
libspdk_keyring.so 00:04:47.767 CC lib/thread/thread.o 00:04:47.767 CC lib/thread/iobuf.o 00:04:47.767 CC lib/sock/sock.o 00:04:47.767 CC lib/sock/sock_rpc.o 00:04:48.026 LIB libspdk_sock.a 00:04:48.026 SO libspdk_sock.so.10.0 00:04:48.285 SYMLINK libspdk_sock.so 00:04:48.544 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:48.544 CC lib/nvme/nvme_ctrlr.o 00:04:48.544 CC lib/nvme/nvme_fabric.o 00:04:48.544 CC lib/nvme/nvme_ns_cmd.o 00:04:48.544 CC lib/nvme/nvme_ns.o 00:04:48.544 CC lib/nvme/nvme_qpair.o 00:04:48.544 CC lib/nvme/nvme_pcie_common.o 00:04:48.544 CC lib/nvme/nvme_pcie.o 00:04:48.544 CC lib/nvme/nvme.o 00:04:49.109 CC lib/nvme/nvme_quirks.o 00:04:49.109 CC lib/nvme/nvme_transport.o 00:04:49.109 CC lib/nvme/nvme_discovery.o 00:04:49.109 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:49.109 LIB libspdk_thread.a 00:04:49.109 SO libspdk_thread.so.11.0 00:04:49.109 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:49.109 CC lib/nvme/nvme_tcp.o 00:04:49.367 CC lib/nvme/nvme_opal.o 00:04:49.367 SYMLINK libspdk_thread.so 00:04:49.367 CC lib/nvme/nvme_io_msg.o 00:04:49.367 CC lib/nvme/nvme_poll_group.o 00:04:49.625 CC lib/nvme/nvme_zns.o 00:04:49.625 CC lib/nvme/nvme_stubs.o 00:04:49.625 CC lib/accel/accel.o 00:04:49.625 CC lib/accel/accel_rpc.o 00:04:49.625 CC lib/accel/accel_sw.o 00:04:49.882 CC lib/blob/blobstore.o 00:04:49.882 CC lib/nvme/nvme_auth.o 00:04:49.882 CC lib/init/json_config.o 00:04:49.882 CC lib/blob/request.o 00:04:49.882 CC lib/init/subsystem.o 00:04:50.139 CC lib/init/subsystem_rpc.o 00:04:50.139 CC lib/blob/zeroes.o 00:04:50.139 CC lib/init/rpc.o 00:04:50.139 CC lib/blob/blob_bs_dev.o 00:04:50.139 CC lib/nvme/nvme_cuse.o 00:04:50.397 LIB libspdk_init.a 00:04:50.397 CC lib/virtio/virtio.o 00:04:50.397 SO libspdk_init.so.6.0 00:04:50.397 CC lib/virtio/virtio_vhost_user.o 00:04:50.397 CC lib/virtio/virtio_vfio_user.o 00:04:50.397 SYMLINK libspdk_init.so 00:04:50.397 CC lib/fsdev/fsdev.o 00:04:50.654 CC lib/fsdev/fsdev_io.o 00:04:50.654 CC lib/fsdev/fsdev_rpc.o 00:04:50.654 CC lib/nvme/nvme_rdma.o 00:04:50.654 CC lib/virtio/virtio_pci.o 00:04:50.913 LIB libspdk_accel.a 00:04:50.913 SO libspdk_accel.so.16.0 00:04:50.913 CC lib/event/app.o 00:04:50.913 CC lib/event/log_rpc.o 00:04:50.913 CC lib/event/reactor.o 00:04:50.913 SYMLINK libspdk_accel.so 00:04:50.913 CC lib/event/app_rpc.o 00:04:50.913 CC lib/event/scheduler_static.o 00:04:50.913 LIB libspdk_virtio.a 00:04:50.913 LIB libspdk_fsdev.a 00:04:50.913 SO libspdk_virtio.so.7.0 00:04:50.913 SO libspdk_fsdev.so.2.0 00:04:51.171 SYMLINK libspdk_virtio.so 00:04:51.171 SYMLINK libspdk_fsdev.so 00:04:51.171 CC lib/bdev/bdev_zone.o 00:04:51.171 CC lib/bdev/bdev.o 00:04:51.171 CC lib/bdev/bdev_rpc.o 00:04:51.171 CC lib/bdev/part.o 00:04:51.171 CC lib/bdev/scsi_nvme.o 00:04:51.171 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:51.429 LIB libspdk_event.a 00:04:51.429 SO libspdk_event.so.14.0 00:04:51.429 SYMLINK libspdk_event.so 00:04:51.997 LIB libspdk_fuse_dispatcher.a 00:04:51.997 SO libspdk_fuse_dispatcher.so.1.0 00:04:51.997 SYMLINK libspdk_fuse_dispatcher.so 00:04:51.997 LIB libspdk_nvme.a 00:04:52.255 SO libspdk_nvme.so.15.0 00:04:52.514 SYMLINK libspdk_nvme.so 00:04:53.109 LIB libspdk_blob.a 00:04:53.109 SO libspdk_blob.so.11.0 00:04:53.109 SYMLINK libspdk_blob.so 00:04:53.369 CC lib/lvol/lvol.o 00:04:53.369 CC lib/blobfs/blobfs.o 00:04:53.369 CC lib/blobfs/tree.o 00:04:54.060 LIB libspdk_bdev.a 00:04:54.060 SO libspdk_bdev.so.17.0 00:04:54.317 SYMLINK libspdk_bdev.so 00:04:54.317 LIB libspdk_blobfs.a 00:04:54.317 SO libspdk_blobfs.so.10.0 00:04:54.317 
CC lib/nvmf/ctrlr.o 00:04:54.317 CC lib/ftl/ftl_core.o 00:04:54.317 CC lib/ftl/ftl_layout.o 00:04:54.317 CC lib/ftl/ftl_init.o 00:04:54.317 CC lib/nvmf/ctrlr_discovery.o 00:04:54.317 CC lib/ublk/ublk.o 00:04:54.317 CC lib/scsi/dev.o 00:04:54.317 CC lib/nbd/nbd.o 00:04:54.318 SYMLINK libspdk_blobfs.so 00:04:54.318 CC lib/nvmf/ctrlr_bdev.o 00:04:54.318 LIB libspdk_lvol.a 00:04:54.574 SO libspdk_lvol.so.10.0 00:04:54.574 SYMLINK libspdk_lvol.so 00:04:54.574 CC lib/nvmf/subsystem.o 00:04:54.574 CC lib/nvmf/nvmf.o 00:04:54.574 CC lib/scsi/lun.o 00:04:54.832 CC lib/scsi/port.o 00:04:54.832 CC lib/scsi/scsi.o 00:04:54.832 CC lib/ftl/ftl_debug.o 00:04:54.832 CC lib/nbd/nbd_rpc.o 00:04:54.832 CC lib/ftl/ftl_io.o 00:04:54.832 CC lib/ftl/ftl_sb.o 00:04:54.832 CC lib/scsi/scsi_bdev.o 00:04:55.090 LIB libspdk_nbd.a 00:04:55.090 CC lib/scsi/scsi_pr.o 00:04:55.090 SO libspdk_nbd.so.7.0 00:04:55.090 CC lib/scsi/scsi_rpc.o 00:04:55.090 CC lib/ftl/ftl_l2p.o 00:04:55.090 CC lib/ublk/ublk_rpc.o 00:04:55.090 SYMLINK libspdk_nbd.so 00:04:55.090 CC lib/ftl/ftl_l2p_flat.o 00:04:55.090 CC lib/nvmf/nvmf_rpc.o 00:04:55.090 CC lib/nvmf/transport.o 00:04:55.090 LIB libspdk_ublk.a 00:04:55.348 CC lib/ftl/ftl_nv_cache.o 00:04:55.348 SO libspdk_ublk.so.3.0 00:04:55.348 CC lib/scsi/task.o 00:04:55.348 SYMLINK libspdk_ublk.so 00:04:55.348 CC lib/ftl/ftl_band.o 00:04:55.348 CC lib/ftl/ftl_band_ops.o 00:04:55.348 CC lib/ftl/ftl_writer.o 00:04:55.348 CC lib/nvmf/tcp.o 00:04:55.348 LIB libspdk_scsi.a 00:04:55.606 SO libspdk_scsi.so.9.0 00:04:55.606 SYMLINK libspdk_scsi.so 00:04:55.606 CC lib/nvmf/stubs.o 00:04:55.606 CC lib/nvmf/mdns_server.o 00:04:55.606 CC lib/ftl/ftl_rq.o 00:04:55.865 CC lib/iscsi/conn.o 00:04:55.865 CC lib/nvmf/rdma.o 00:04:55.865 CC lib/vhost/vhost.o 00:04:55.865 CC lib/vhost/vhost_rpc.o 00:04:56.122 CC lib/vhost/vhost_scsi.o 00:04:56.122 CC lib/iscsi/init_grp.o 00:04:56.122 CC lib/iscsi/iscsi.o 00:04:56.122 CC lib/iscsi/param.o 00:04:56.122 CC lib/nvmf/auth.o 00:04:56.380 CC lib/ftl/ftl_reloc.o 00:04:56.380 CC lib/iscsi/portal_grp.o 00:04:56.380 CC lib/vhost/vhost_blk.o 00:04:56.380 CC lib/vhost/rte_vhost_user.o 00:04:56.380 CC lib/iscsi/tgt_node.o 00:04:56.638 CC lib/ftl/ftl_l2p_cache.o 00:04:56.638 CC lib/ftl/ftl_p2l.o 00:04:56.896 CC lib/iscsi/iscsi_subsystem.o 00:04:56.896 CC lib/ftl/ftl_p2l_log.o 00:04:56.896 CC lib/iscsi/iscsi_rpc.o 00:04:56.896 CC lib/iscsi/task.o 00:04:57.154 CC lib/ftl/mngt/ftl_mngt.o 00:04:57.154 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:57.154 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:57.154 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:57.154 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:57.154 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:57.154 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:57.154 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:57.154 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:57.154 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:57.411 LIB libspdk_vhost.a 00:04:57.411 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:57.411 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:57.411 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:57.411 SO libspdk_vhost.so.8.0 00:04:57.411 CC lib/ftl/utils/ftl_conf.o 00:04:57.411 CC lib/ftl/utils/ftl_md.o 00:04:57.412 LIB libspdk_iscsi.a 00:04:57.412 SYMLINK libspdk_vhost.so 00:04:57.412 CC lib/ftl/utils/ftl_mempool.o 00:04:57.412 CC lib/ftl/utils/ftl_bitmap.o 00:04:57.412 CC lib/ftl/utils/ftl_property.o 00:04:57.412 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:57.412 SO libspdk_iscsi.so.8.0 00:04:57.669 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:57.669 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:57.669 
CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:57.669 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:57.669 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:57.669 SYMLINK libspdk_iscsi.so 00:04:57.669 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:57.669 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:57.669 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:57.669 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:57.669 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:57.669 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:57.669 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:57.669 CC lib/ftl/base/ftl_base_dev.o 00:04:57.928 CC lib/ftl/base/ftl_base_bdev.o 00:04:57.928 CC lib/ftl/ftl_trace.o 00:04:57.928 LIB libspdk_nvmf.a 00:04:57.928 SO libspdk_nvmf.so.20.0 00:04:57.928 LIB libspdk_ftl.a 00:04:58.188 SYMLINK libspdk_nvmf.so 00:04:58.188 SO libspdk_ftl.so.9.0 00:04:58.447 SYMLINK libspdk_ftl.so 00:04:58.705 CC module/env_dpdk/env_dpdk_rpc.o 00:04:58.705 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:58.705 CC module/accel/ioat/accel_ioat.o 00:04:58.705 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:58.705 CC module/fsdev/aio/fsdev_aio.o 00:04:58.964 CC module/sock/posix/posix.o 00:04:58.964 CC module/scheduler/gscheduler/gscheduler.o 00:04:58.964 CC module/accel/error/accel_error.o 00:04:58.964 CC module/blob/bdev/blob_bdev.o 00:04:58.964 LIB libspdk_env_dpdk_rpc.a 00:04:58.964 CC module/keyring/file/keyring.o 00:04:58.964 SO libspdk_env_dpdk_rpc.so.6.0 00:04:58.964 SYMLINK libspdk_env_dpdk_rpc.so 00:04:58.964 CC module/keyring/file/keyring_rpc.o 00:04:58.964 LIB libspdk_scheduler_gscheduler.a 00:04:58.964 SO libspdk_scheduler_gscheduler.so.4.0 00:04:58.964 LIB libspdk_scheduler_dpdk_governor.a 00:04:58.964 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:58.964 SYMLINK libspdk_scheduler_gscheduler.so 00:04:58.964 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:58.964 CC module/accel/error/accel_error_rpc.o 00:04:58.964 CC module/fsdev/aio/linux_aio_mgr.o 00:04:58.964 LIB libspdk_scheduler_dynamic.a 00:04:58.964 CC module/accel/ioat/accel_ioat_rpc.o 00:04:58.964 LIB libspdk_keyring_file.a 00:04:58.964 SO libspdk_scheduler_dynamic.so.4.0 00:04:58.964 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:58.964 SO libspdk_keyring_file.so.2.0 00:04:59.222 SYMLINK libspdk_keyring_file.so 00:04:59.222 SYMLINK libspdk_scheduler_dynamic.so 00:04:59.222 LIB libspdk_blob_bdev.a 00:04:59.222 SO libspdk_blob_bdev.so.11.0 00:04:59.222 LIB libspdk_accel_error.a 00:04:59.222 LIB libspdk_accel_ioat.a 00:04:59.222 SO libspdk_accel_error.so.2.0 00:04:59.222 SO libspdk_accel_ioat.so.6.0 00:04:59.222 SYMLINK libspdk_blob_bdev.so 00:04:59.222 SYMLINK libspdk_accel_error.so 00:04:59.222 SYMLINK libspdk_accel_ioat.so 00:04:59.222 CC module/accel/dsa/accel_dsa.o 00:04:59.222 CC module/accel/dsa/accel_dsa_rpc.o 00:04:59.222 CC module/keyring/linux/keyring.o 00:04:59.222 CC module/keyring/linux/keyring_rpc.o 00:04:59.222 CC module/accel/iaa/accel_iaa.o 00:04:59.222 CC module/accel/iaa/accel_iaa_rpc.o 00:04:59.481 LIB libspdk_fsdev_aio.a 00:04:59.481 SO libspdk_fsdev_aio.so.1.0 00:04:59.481 LIB libspdk_keyring_linux.a 00:04:59.481 CC module/blobfs/bdev/blobfs_bdev.o 00:04:59.481 CC module/bdev/delay/vbdev_delay.o 00:04:59.481 SO libspdk_keyring_linux.so.1.0 00:04:59.481 LIB libspdk_accel_dsa.a 00:04:59.481 LIB libspdk_accel_iaa.a 00:04:59.481 SYMLINK libspdk_fsdev_aio.so 00:04:59.481 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:59.481 SO libspdk_accel_dsa.so.5.0 00:04:59.481 SYMLINK libspdk_keyring_linux.so 00:04:59.481 SO libspdk_accel_iaa.so.3.0 00:04:59.481 CC 
module/bdev/error/vbdev_error.o 00:04:59.481 CC module/bdev/gpt/gpt.o 00:04:59.481 CC module/bdev/lvol/vbdev_lvol.o 00:04:59.481 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:59.481 SYMLINK libspdk_accel_dsa.so 00:04:59.481 SYMLINK libspdk_accel_iaa.so 00:04:59.481 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:59.481 CC module/bdev/error/vbdev_error_rpc.o 00:04:59.481 CC module/bdev/gpt/vbdev_gpt.o 00:04:59.739 LIB libspdk_sock_posix.a 00:04:59.739 CC module/bdev/malloc/bdev_malloc.o 00:04:59.739 SO libspdk_sock_posix.so.6.0 00:04:59.739 LIB libspdk_blobfs_bdev.a 00:04:59.739 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:59.739 SO libspdk_blobfs_bdev.so.6.0 00:04:59.739 SYMLINK libspdk_sock_posix.so 00:04:59.739 LIB libspdk_bdev_error.a 00:04:59.739 SYMLINK libspdk_blobfs_bdev.so 00:04:59.739 SO libspdk_bdev_error.so.6.0 00:04:59.739 LIB libspdk_bdev_delay.a 00:04:59.739 LIB libspdk_bdev_gpt.a 00:04:59.739 SO libspdk_bdev_delay.so.6.0 00:04:59.739 CC module/bdev/null/bdev_null.o 00:04:59.739 SO libspdk_bdev_gpt.so.6.0 00:04:59.739 SYMLINK libspdk_bdev_error.so 00:04:59.997 SYMLINK libspdk_bdev_gpt.so 00:04:59.997 CC module/bdev/nvme/bdev_nvme.o 00:04:59.997 SYMLINK libspdk_bdev_delay.so 00:04:59.997 CC module/bdev/null/bdev_null_rpc.o 00:04:59.997 CC module/bdev/passthru/vbdev_passthru.o 00:04:59.997 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:59.997 LIB libspdk_bdev_malloc.a 00:04:59.997 SO libspdk_bdev_malloc.so.6.0 00:04:59.997 CC module/bdev/raid/bdev_raid.o 00:04:59.997 CC module/bdev/split/vbdev_split.o 00:04:59.997 SYMLINK libspdk_bdev_malloc.so 00:04:59.997 LIB libspdk_bdev_lvol.a 00:04:59.997 CC module/bdev/raid/bdev_raid_rpc.o 00:04:59.997 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:59.997 CC module/bdev/raid/bdev_raid_sb.o 00:04:59.997 SO libspdk_bdev_lvol.so.6.0 00:04:59.997 LIB libspdk_bdev_null.a 00:04:59.997 SO libspdk_bdev_null.so.6.0 00:04:59.997 SYMLINK libspdk_bdev_lvol.so 00:05:00.256 CC module/bdev/raid/raid0.o 00:05:00.256 SYMLINK libspdk_bdev_null.so 00:05:00.256 CC module/bdev/raid/raid1.o 00:05:00.256 CC module/bdev/xnvme/bdev_xnvme.o 00:05:00.256 LIB libspdk_bdev_passthru.a 00:05:00.256 CC module/bdev/split/vbdev_split_rpc.o 00:05:00.256 SO libspdk_bdev_passthru.so.6.0 00:05:00.256 CC module/bdev/raid/concat.o 00:05:00.256 SYMLINK libspdk_bdev_passthru.so 00:05:00.256 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:05:00.256 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:00.256 LIB libspdk_bdev_split.a 00:05:00.513 SO libspdk_bdev_split.so.6.0 00:05:00.513 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:00.513 LIB libspdk_bdev_xnvme.a 00:05:00.513 CC module/bdev/nvme/nvme_rpc.o 00:05:00.513 CC module/bdev/aio/bdev_aio.o 00:05:00.513 LIB libspdk_bdev_zone_block.a 00:05:00.513 CC module/bdev/ftl/bdev_ftl.o 00:05:00.514 SO libspdk_bdev_xnvme.so.3.0 00:05:00.514 SYMLINK libspdk_bdev_split.so 00:05:00.514 CC module/bdev/aio/bdev_aio_rpc.o 00:05:00.514 SO libspdk_bdev_zone_block.so.6.0 00:05:00.514 SYMLINK libspdk_bdev_xnvme.so 00:05:00.514 SYMLINK libspdk_bdev_zone_block.so 00:05:00.514 CC module/bdev/nvme/bdev_mdns_client.o 00:05:00.514 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:00.514 CC module/bdev/iscsi/bdev_iscsi.o 00:05:00.772 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:00.772 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:00.772 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:00.772 CC module/bdev/nvme/vbdev_opal.o 00:05:00.772 LIB libspdk_bdev_aio.a 00:05:00.772 LIB libspdk_bdev_ftl.a 00:05:00.772 SO libspdk_bdev_aio.so.6.0 00:05:00.772 CC 
module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:00.772 SO libspdk_bdev_ftl.so.6.0 00:05:00.772 SYMLINK libspdk_bdev_aio.so 00:05:00.772 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:00.772 SYMLINK libspdk_bdev_ftl.so 00:05:00.772 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:01.030 LIB libspdk_bdev_iscsi.a 00:05:01.030 SO libspdk_bdev_iscsi.so.6.0 00:05:01.030 SYMLINK libspdk_bdev_iscsi.so 00:05:01.030 LIB libspdk_bdev_raid.a 00:05:01.030 LIB libspdk_bdev_virtio.a 00:05:01.030 SO libspdk_bdev_virtio.so.6.0 00:05:01.030 SO libspdk_bdev_raid.so.6.0 00:05:01.030 SYMLINK libspdk_bdev_virtio.so 00:05:01.288 SYMLINK libspdk_bdev_raid.so 00:05:01.855 LIB libspdk_bdev_nvme.a 00:05:02.122 SO libspdk_bdev_nvme.so.7.1 00:05:02.122 SYMLINK libspdk_bdev_nvme.so 00:05:02.427 CC module/event/subsystems/vmd/vmd.o 00:05:02.427 CC module/event/subsystems/iobuf/iobuf.o 00:05:02.427 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:02.427 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:02.427 CC module/event/subsystems/fsdev/fsdev.o 00:05:02.427 CC module/event/subsystems/sock/sock.o 00:05:02.427 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:02.427 CC module/event/subsystems/keyring/keyring.o 00:05:02.427 CC module/event/subsystems/scheduler/scheduler.o 00:05:02.685 LIB libspdk_event_keyring.a 00:05:02.685 LIB libspdk_event_vhost_blk.a 00:05:02.685 LIB libspdk_event_fsdev.a 00:05:02.685 LIB libspdk_event_vmd.a 00:05:02.685 LIB libspdk_event_sock.a 00:05:02.685 LIB libspdk_event_scheduler.a 00:05:02.685 LIB libspdk_event_iobuf.a 00:05:02.685 SO libspdk_event_keyring.so.1.0 00:05:02.685 SO libspdk_event_fsdev.so.1.0 00:05:02.685 SO libspdk_event_vhost_blk.so.3.0 00:05:02.685 SO libspdk_event_sock.so.5.0 00:05:02.685 SO libspdk_event_scheduler.so.4.0 00:05:02.685 SO libspdk_event_vmd.so.6.0 00:05:02.685 SO libspdk_event_iobuf.so.3.0 00:05:02.685 SYMLINK libspdk_event_fsdev.so 00:05:02.685 SYMLINK libspdk_event_keyring.so 00:05:02.685 SYMLINK libspdk_event_vhost_blk.so 00:05:02.685 SYMLINK libspdk_event_sock.so 00:05:02.685 SYMLINK libspdk_event_scheduler.so 00:05:02.685 SYMLINK libspdk_event_iobuf.so 00:05:02.685 SYMLINK libspdk_event_vmd.so 00:05:02.945 CC module/event/subsystems/accel/accel.o 00:05:03.203 LIB libspdk_event_accel.a 00:05:03.203 SO libspdk_event_accel.so.6.0 00:05:03.203 SYMLINK libspdk_event_accel.so 00:05:03.462 CC module/event/subsystems/bdev/bdev.o 00:05:03.462 LIB libspdk_event_bdev.a 00:05:03.462 SO libspdk_event_bdev.so.6.0 00:05:03.721 SYMLINK libspdk_event_bdev.so 00:05:03.721 CC module/event/subsystems/scsi/scsi.o 00:05:03.721 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:03.721 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:03.721 CC module/event/subsystems/ublk/ublk.o 00:05:03.721 CC module/event/subsystems/nbd/nbd.o 00:05:03.979 LIB libspdk_event_scsi.a 00:05:03.979 SO libspdk_event_scsi.so.6.0 00:05:03.979 LIB libspdk_event_ublk.a 00:05:03.979 LIB libspdk_event_nbd.a 00:05:03.979 SO libspdk_event_ublk.so.3.0 00:05:03.979 LIB libspdk_event_nvmf.a 00:05:03.979 SYMLINK libspdk_event_scsi.so 00:05:03.979 SO libspdk_event_nbd.so.6.0 00:05:03.979 SO libspdk_event_nvmf.so.6.0 00:05:03.979 SYMLINK libspdk_event_ublk.so 00:05:03.979 SYMLINK libspdk_event_nbd.so 00:05:03.979 SYMLINK libspdk_event_nvmf.so 00:05:04.238 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:04.238 CC module/event/subsystems/iscsi/iscsi.o 00:05:04.238 LIB libspdk_event_vhost_scsi.a 00:05:04.238 SO libspdk_event_vhost_scsi.so.3.0 00:05:04.238 LIB libspdk_event_iscsi.a 00:05:04.238 SO 
libspdk_event_iscsi.so.6.0 00:05:04.238 SYMLINK libspdk_event_vhost_scsi.so 00:05:04.497 SYMLINK libspdk_event_iscsi.so 00:05:04.497 SO libspdk.so.6.0 00:05:04.497 SYMLINK libspdk.so 00:05:04.756 CC app/trace_record/trace_record.o 00:05:04.756 CXX app/trace/trace.o 00:05:04.756 TEST_HEADER include/spdk/accel.h 00:05:04.756 TEST_HEADER include/spdk/accel_module.h 00:05:04.756 TEST_HEADER include/spdk/assert.h 00:05:04.756 TEST_HEADER include/spdk/barrier.h 00:05:04.756 TEST_HEADER include/spdk/base64.h 00:05:04.756 TEST_HEADER include/spdk/bdev.h 00:05:04.756 TEST_HEADER include/spdk/bdev_module.h 00:05:04.756 TEST_HEADER include/spdk/bdev_zone.h 00:05:04.756 TEST_HEADER include/spdk/bit_array.h 00:05:04.756 TEST_HEADER include/spdk/bit_pool.h 00:05:04.756 TEST_HEADER include/spdk/blob_bdev.h 00:05:04.756 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:04.756 TEST_HEADER include/spdk/blobfs.h 00:05:04.756 TEST_HEADER include/spdk/blob.h 00:05:04.756 TEST_HEADER include/spdk/conf.h 00:05:04.756 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:04.756 TEST_HEADER include/spdk/config.h 00:05:04.756 TEST_HEADER include/spdk/cpuset.h 00:05:04.756 TEST_HEADER include/spdk/crc16.h 00:05:04.756 TEST_HEADER include/spdk/crc32.h 00:05:04.756 TEST_HEADER include/spdk/crc64.h 00:05:04.756 TEST_HEADER include/spdk/dif.h 00:05:04.756 CC app/nvmf_tgt/nvmf_main.o 00:05:04.756 TEST_HEADER include/spdk/dma.h 00:05:04.756 TEST_HEADER include/spdk/endian.h 00:05:04.756 TEST_HEADER include/spdk/env_dpdk.h 00:05:04.756 TEST_HEADER include/spdk/env.h 00:05:04.756 TEST_HEADER include/spdk/event.h 00:05:04.756 TEST_HEADER include/spdk/fd_group.h 00:05:04.756 TEST_HEADER include/spdk/fd.h 00:05:04.756 CC examples/util/zipf/zipf.o 00:05:04.756 TEST_HEADER include/spdk/file.h 00:05:04.756 TEST_HEADER include/spdk/fsdev.h 00:05:04.756 CC examples/ioat/perf/perf.o 00:05:04.756 TEST_HEADER include/spdk/fsdev_module.h 00:05:04.756 TEST_HEADER include/spdk/ftl.h 00:05:04.756 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:04.756 TEST_HEADER include/spdk/gpt_spec.h 00:05:04.756 TEST_HEADER include/spdk/hexlify.h 00:05:04.756 TEST_HEADER include/spdk/histogram_data.h 00:05:04.756 TEST_HEADER include/spdk/idxd.h 00:05:04.756 TEST_HEADER include/spdk/idxd_spec.h 00:05:04.756 TEST_HEADER include/spdk/init.h 00:05:04.756 TEST_HEADER include/spdk/ioat.h 00:05:04.756 CC test/thread/poller_perf/poller_perf.o 00:05:04.756 TEST_HEADER include/spdk/ioat_spec.h 00:05:04.756 TEST_HEADER include/spdk/iscsi_spec.h 00:05:04.756 TEST_HEADER include/spdk/json.h 00:05:04.756 TEST_HEADER include/spdk/jsonrpc.h 00:05:04.756 TEST_HEADER include/spdk/keyring.h 00:05:04.756 TEST_HEADER include/spdk/keyring_module.h 00:05:04.756 TEST_HEADER include/spdk/likely.h 00:05:04.756 TEST_HEADER include/spdk/log.h 00:05:04.756 TEST_HEADER include/spdk/lvol.h 00:05:04.756 TEST_HEADER include/spdk/md5.h 00:05:04.756 TEST_HEADER include/spdk/memory.h 00:05:04.756 CC test/dma/test_dma/test_dma.o 00:05:04.756 TEST_HEADER include/spdk/mmio.h 00:05:04.756 TEST_HEADER include/spdk/nbd.h 00:05:04.756 TEST_HEADER include/spdk/net.h 00:05:04.756 TEST_HEADER include/spdk/notify.h 00:05:04.756 TEST_HEADER include/spdk/nvme.h 00:05:04.756 CC test/app/bdev_svc/bdev_svc.o 00:05:04.756 TEST_HEADER include/spdk/nvme_intel.h 00:05:04.756 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:04.756 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:04.756 TEST_HEADER include/spdk/nvme_spec.h 00:05:04.756 TEST_HEADER include/spdk/nvme_zns.h 00:05:04.756 TEST_HEADER include/spdk/nvmf_cmd.h 
00:05:04.756 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:04.756 TEST_HEADER include/spdk/nvmf.h 00:05:04.756 TEST_HEADER include/spdk/nvmf_spec.h 00:05:04.756 TEST_HEADER include/spdk/nvmf_transport.h 00:05:04.756 TEST_HEADER include/spdk/opal.h 00:05:04.756 TEST_HEADER include/spdk/opal_spec.h 00:05:04.756 TEST_HEADER include/spdk/pci_ids.h 00:05:04.756 TEST_HEADER include/spdk/pipe.h 00:05:04.756 TEST_HEADER include/spdk/queue.h 00:05:04.756 TEST_HEADER include/spdk/reduce.h 00:05:04.756 TEST_HEADER include/spdk/rpc.h 00:05:04.756 TEST_HEADER include/spdk/scheduler.h 00:05:04.756 TEST_HEADER include/spdk/scsi.h 00:05:04.756 TEST_HEADER include/spdk/scsi_spec.h 00:05:04.756 TEST_HEADER include/spdk/sock.h 00:05:04.756 TEST_HEADER include/spdk/stdinc.h 00:05:04.756 TEST_HEADER include/spdk/string.h 00:05:04.756 TEST_HEADER include/spdk/thread.h 00:05:04.756 TEST_HEADER include/spdk/trace.h 00:05:04.756 TEST_HEADER include/spdk/trace_parser.h 00:05:04.756 TEST_HEADER include/spdk/tree.h 00:05:04.756 TEST_HEADER include/spdk/ublk.h 00:05:04.756 TEST_HEADER include/spdk/util.h 00:05:04.756 TEST_HEADER include/spdk/uuid.h 00:05:04.756 TEST_HEADER include/spdk/version.h 00:05:04.756 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:04.756 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:04.756 TEST_HEADER include/spdk/vhost.h 00:05:04.756 TEST_HEADER include/spdk/vmd.h 00:05:04.756 TEST_HEADER include/spdk/xor.h 00:05:04.756 TEST_HEADER include/spdk/zipf.h 00:05:04.756 CXX test/cpp_headers/accel.o 00:05:05.014 LINK interrupt_tgt 00:05:05.014 LINK nvmf_tgt 00:05:05.014 LINK poller_perf 00:05:05.014 LINK spdk_trace_record 00:05:05.014 LINK zipf 00:05:05.014 LINK ioat_perf 00:05:05.014 LINK bdev_svc 00:05:05.014 CXX test/cpp_headers/accel_module.o 00:05:05.014 LINK spdk_trace 00:05:05.272 CXX test/cpp_headers/assert.o 00:05:05.272 CC examples/ioat/verify/verify.o 00:05:05.272 CC app/iscsi_tgt/iscsi_tgt.o 00:05:05.272 CC test/event/event_perf/event_perf.o 00:05:05.272 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:05.272 LINK test_dma 00:05:05.272 CC test/env/mem_callbacks/mem_callbacks.o 00:05:05.272 CC test/env/vtophys/vtophys.o 00:05:05.272 CC app/spdk_tgt/spdk_tgt.o 00:05:05.272 CXX test/cpp_headers/barrier.o 00:05:05.272 CC examples/thread/thread/thread_ex.o 00:05:05.272 LINK event_perf 00:05:05.272 LINK verify 00:05:05.272 LINK iscsi_tgt 00:05:05.533 LINK vtophys 00:05:05.533 CXX test/cpp_headers/base64.o 00:05:05.533 CXX test/cpp_headers/bdev.o 00:05:05.533 LINK spdk_tgt 00:05:05.533 CC test/app/histogram_perf/histogram_perf.o 00:05:05.533 CC test/event/reactor/reactor.o 00:05:05.533 CC test/app/jsoncat/jsoncat.o 00:05:05.533 LINK thread 00:05:05.533 CC test/app/stub/stub.o 00:05:05.533 CXX test/cpp_headers/bdev_module.o 00:05:05.794 LINK nvme_fuzz 00:05:05.794 CXX test/cpp_headers/bdev_zone.o 00:05:05.794 LINK histogram_perf 00:05:05.794 LINK reactor 00:05:05.794 CC app/spdk_lspci/spdk_lspci.o 00:05:05.794 LINK jsoncat 00:05:05.794 LINK stub 00:05:05.794 LINK mem_callbacks 00:05:05.794 CXX test/cpp_headers/bit_array.o 00:05:05.794 LINK spdk_lspci 00:05:06.055 CC test/event/reactor_perf/reactor_perf.o 00:05:06.055 CC test/event/app_repeat/app_repeat.o 00:05:06.055 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:06.055 CC examples/sock/hello_world/hello_sock.o 00:05:06.055 CC examples/vmd/lsvmd/lsvmd.o 00:05:06.055 CXX test/cpp_headers/bit_pool.o 00:05:06.055 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:06.055 CC test/rpc_client/rpc_client_test.o 00:05:06.055 LINK reactor_perf 
00:05:06.055 CC app/spdk_nvme_perf/perf.o 00:05:06.055 LINK app_repeat 00:05:06.055 LINK lsvmd 00:05:06.315 CXX test/cpp_headers/blob_bdev.o 00:05:06.315 LINK env_dpdk_post_init 00:05:06.315 CC test/accel/dif/dif.o 00:05:06.315 LINK rpc_client_test 00:05:06.315 LINK hello_sock 00:05:06.315 CC test/event/scheduler/scheduler.o 00:05:06.315 CC app/spdk_nvme_identify/identify.o 00:05:06.315 CXX test/cpp_headers/blobfs_bdev.o 00:05:06.315 CC examples/vmd/led/led.o 00:05:06.574 CXX test/cpp_headers/blobfs.o 00:05:06.574 CC test/env/memory/memory_ut.o 00:05:06.574 CC test/env/pci/pci_ut.o 00:05:06.574 LINK led 00:05:06.574 LINK scheduler 00:05:06.574 CXX test/cpp_headers/blob.o 00:05:06.833 CC examples/idxd/perf/perf.o 00:05:06.833 CC app/spdk_nvme_discover/discovery_aer.o 00:05:06.833 CXX test/cpp_headers/conf.o 00:05:06.833 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:06.833 LINK pci_ut 00:05:06.833 CXX test/cpp_headers/config.o 00:05:06.833 CXX test/cpp_headers/cpuset.o 00:05:06.833 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:06.833 LINK spdk_nvme_discover 00:05:07.091 LINK dif 00:05:07.091 LINK spdk_nvme_perf 00:05:07.091 CXX test/cpp_headers/crc16.o 00:05:07.091 LINK idxd_perf 00:05:07.091 CXX test/cpp_headers/crc32.o 00:05:07.091 CXX test/cpp_headers/crc64.o 00:05:07.091 CXX test/cpp_headers/dif.o 00:05:07.091 LINK spdk_nvme_identify 00:05:07.350 CC app/spdk_top/spdk_top.o 00:05:07.350 CXX test/cpp_headers/dma.o 00:05:07.350 LINK vhost_fuzz 00:05:07.350 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:07.350 CXX test/cpp_headers/endian.o 00:05:07.350 CC test/blobfs/mkfs/mkfs.o 00:05:07.350 CC examples/accel/perf/accel_perf.o 00:05:07.350 CC test/lvol/esnap/esnap.o 00:05:07.350 CXX test/cpp_headers/env_dpdk.o 00:05:07.350 CC examples/blob/hello_world/hello_blob.o 00:05:07.610 LINK iscsi_fuzz 00:05:07.611 LINK mkfs 00:05:07.611 LINK memory_ut 00:05:07.611 CXX test/cpp_headers/env.o 00:05:07.611 LINK hello_fsdev 00:05:07.611 LINK hello_blob 00:05:07.611 CC examples/nvme/hello_world/hello_world.o 00:05:07.611 CC examples/nvme/reconnect/reconnect.o 00:05:07.611 CXX test/cpp_headers/event.o 00:05:07.871 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:07.871 CC examples/blob/cli/blobcli.o 00:05:07.871 CXX test/cpp_headers/fd_group.o 00:05:07.871 LINK hello_world 00:05:07.871 CC app/vhost/vhost.o 00:05:07.871 LINK accel_perf 00:05:07.871 CC app/spdk_dd/spdk_dd.o 00:05:08.132 CXX test/cpp_headers/fd.o 00:05:08.132 LINK reconnect 00:05:08.132 CXX test/cpp_headers/file.o 00:05:08.132 LINK vhost 00:05:08.132 CXX test/cpp_headers/fsdev.o 00:05:08.132 LINK spdk_top 00:05:08.391 CXX test/cpp_headers/fsdev_module.o 00:05:08.391 CC examples/nvme/hotplug/hotplug.o 00:05:08.391 LINK spdk_dd 00:05:08.391 CC examples/nvme/arbitration/arbitration.o 00:05:08.391 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:08.391 LINK nvme_manage 00:05:08.391 LINK blobcli 00:05:08.391 CC app/fio/nvme/fio_plugin.o 00:05:08.391 CXX test/cpp_headers/ftl.o 00:05:08.391 CXX test/cpp_headers/fuse_dispatcher.o 00:05:08.391 CC examples/bdev/hello_world/hello_bdev.o 00:05:08.391 LINK cmb_copy 00:05:08.391 LINK hotplug 00:05:08.391 CXX test/cpp_headers/gpt_spec.o 00:05:08.650 CXX test/cpp_headers/hexlify.o 00:05:08.650 LINK arbitration 00:05:08.650 CXX test/cpp_headers/histogram_data.o 00:05:08.650 LINK hello_bdev 00:05:08.650 CC examples/nvme/abort/abort.o 00:05:08.650 CXX test/cpp_headers/idxd.o 00:05:08.650 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:08.650 CC app/fio/bdev/fio_plugin.o 00:05:08.650 CXX 
test/cpp_headers/idxd_spec.o 00:05:08.650 CC examples/bdev/bdevperf/bdevperf.o 00:05:08.908 LINK pmr_persistence 00:05:08.908 CXX test/cpp_headers/init.o 00:05:08.908 LINK spdk_nvme 00:05:08.908 CXX test/cpp_headers/ioat.o 00:05:08.908 CC test/nvme/aer/aer.o 00:05:08.908 CXX test/cpp_headers/ioat_spec.o 00:05:08.908 CC test/bdev/bdevio/bdevio.o 00:05:08.908 CXX test/cpp_headers/iscsi_spec.o 00:05:08.908 CXX test/cpp_headers/json.o 00:05:08.908 CXX test/cpp_headers/jsonrpc.o 00:05:08.908 CXX test/cpp_headers/keyring.o 00:05:08.908 LINK abort 00:05:09.167 CXX test/cpp_headers/keyring_module.o 00:05:09.167 CXX test/cpp_headers/likely.o 00:05:09.167 CXX test/cpp_headers/log.o 00:05:09.167 CXX test/cpp_headers/lvol.o 00:05:09.167 LINK aer 00:05:09.167 CC test/nvme/reset/reset.o 00:05:09.167 LINK spdk_bdev 00:05:09.167 LINK bdevio 00:05:09.167 CXX test/cpp_headers/md5.o 00:05:09.167 CXX test/cpp_headers/memory.o 00:05:09.425 CC test/nvme/e2edp/nvme_dp.o 00:05:09.425 CC test/nvme/sgl/sgl.o 00:05:09.425 CXX test/cpp_headers/mmio.o 00:05:09.425 CC test/nvme/overhead/overhead.o 00:05:09.425 CC test/nvme/err_injection/err_injection.o 00:05:09.425 LINK reset 00:05:09.425 CC test/nvme/startup/startup.o 00:05:09.425 CXX test/cpp_headers/nbd.o 00:05:09.425 CXX test/cpp_headers/net.o 00:05:09.425 CC test/nvme/reserve/reserve.o 00:05:09.425 LINK err_injection 00:05:09.425 LINK sgl 00:05:09.425 LINK bdevperf 00:05:09.684 LINK nvme_dp 00:05:09.684 LINK startup 00:05:09.684 CXX test/cpp_headers/notify.o 00:05:09.684 CC test/nvme/simple_copy/simple_copy.o 00:05:09.684 LINK overhead 00:05:09.684 LINK reserve 00:05:09.684 CC test/nvme/connect_stress/connect_stress.o 00:05:09.684 CC test/nvme/boot_partition/boot_partition.o 00:05:09.684 CXX test/cpp_headers/nvme.o 00:05:09.684 CC test/nvme/compliance/nvme_compliance.o 00:05:09.684 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:09.684 CC test/nvme/fused_ordering/fused_ordering.o 00:05:09.941 LINK boot_partition 00:05:09.941 CXX test/cpp_headers/nvme_intel.o 00:05:09.941 LINK simple_copy 00:05:09.941 CC test/nvme/fdp/fdp.o 00:05:09.941 LINK connect_stress 00:05:09.941 CC examples/nvmf/nvmf/nvmf.o 00:05:09.941 CXX test/cpp_headers/nvme_ocssd.o 00:05:09.941 LINK doorbell_aers 00:05:09.942 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:09.942 CXX test/cpp_headers/nvme_spec.o 00:05:09.942 LINK fused_ordering 00:05:09.942 CC test/nvme/cuse/cuse.o 00:05:09.942 CXX test/cpp_headers/nvme_zns.o 00:05:09.942 CXX test/cpp_headers/nvmf_cmd.o 00:05:10.201 LINK nvmf 00:05:10.201 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:10.201 LINK nvme_compliance 00:05:10.201 CXX test/cpp_headers/nvmf.o 00:05:10.201 CXX test/cpp_headers/nvmf_spec.o 00:05:10.201 CXX test/cpp_headers/nvmf_transport.o 00:05:10.201 LINK fdp 00:05:10.201 CXX test/cpp_headers/opal.o 00:05:10.201 CXX test/cpp_headers/opal_spec.o 00:05:10.201 CXX test/cpp_headers/pci_ids.o 00:05:10.201 CXX test/cpp_headers/pipe.o 00:05:10.201 CXX test/cpp_headers/queue.o 00:05:10.201 CXX test/cpp_headers/reduce.o 00:05:10.201 CXX test/cpp_headers/rpc.o 00:05:10.201 CXX test/cpp_headers/scheduler.o 00:05:10.201 CXX test/cpp_headers/scsi.o 00:05:10.201 CXX test/cpp_headers/scsi_spec.o 00:05:10.465 CXX test/cpp_headers/sock.o 00:05:10.465 CXX test/cpp_headers/stdinc.o 00:05:10.465 CXX test/cpp_headers/string.o 00:05:10.465 CXX test/cpp_headers/thread.o 00:05:10.465 CXX test/cpp_headers/trace.o 00:05:10.465 CXX test/cpp_headers/trace_parser.o 00:05:10.465 CXX test/cpp_headers/tree.o 00:05:10.465 CXX test/cpp_headers/ublk.o 00:05:10.465 CXX 
test/cpp_headers/util.o 00:05:10.465 CXX test/cpp_headers/uuid.o 00:05:10.465 CXX test/cpp_headers/version.o 00:05:10.465 CXX test/cpp_headers/vfio_user_pci.o 00:05:10.465 CXX test/cpp_headers/vfio_user_spec.o 00:05:10.465 CXX test/cpp_headers/vhost.o 00:05:10.465 CXX test/cpp_headers/vmd.o 00:05:10.465 CXX test/cpp_headers/xor.o 00:05:10.465 CXX test/cpp_headers/zipf.o 00:05:11.033 LINK cuse 00:05:12.420 LINK esnap 00:05:12.679 00:05:12.679 real 1m3.583s 00:05:12.679 user 5m4.019s 00:05:12.679 sys 0m55.562s 00:05:12.679 17:39:32 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:05:12.679 17:39:32 make -- common/autotest_common.sh@10 -- $ set +x 00:05:12.679 ************************************ 00:05:12.679 END TEST make 00:05:12.679 ************************************ 00:05:12.941 17:39:32 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:12.941 17:39:32 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:12.941 17:39:32 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:12.941 17:39:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:12.941 17:39:32 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:12.941 17:39:32 -- pm/common@44 -- $ pid=5811 00:05:12.941 17:39:32 -- pm/common@50 -- $ kill -TERM 5811 00:05:12.941 17:39:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:12.941 17:39:32 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:12.941 17:39:32 -- pm/common@44 -- $ pid=5812 00:05:12.941 17:39:32 -- pm/common@50 -- $ kill -TERM 5812 00:05:12.941 17:39:32 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:12.941 17:39:32 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:12.941 17:39:32 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:12.941 17:39:32 -- common/autotest_common.sh@1691 -- # lcov --version 00:05:12.941 17:39:32 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:12.941 17:39:32 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:12.941 17:39:32 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.941 17:39:32 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.941 17:39:32 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.941 17:39:32 -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.941 17:39:32 -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.941 17:39:32 -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.941 17:39:32 -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.941 17:39:32 -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.941 17:39:32 -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.941 17:39:32 -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.941 17:39:32 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.941 17:39:32 -- scripts/common.sh@344 -- # case "$op" in 00:05:12.941 17:39:32 -- scripts/common.sh@345 -- # : 1 00:05:12.941 17:39:32 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.941 17:39:32 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.941 17:39:32 -- scripts/common.sh@365 -- # decimal 1 00:05:12.941 17:39:32 -- scripts/common.sh@353 -- # local d=1 00:05:12.941 17:39:32 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.941 17:39:32 -- scripts/common.sh@355 -- # echo 1 00:05:12.941 17:39:32 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.941 17:39:32 -- scripts/common.sh@366 -- # decimal 2 00:05:12.941 17:39:32 -- scripts/common.sh@353 -- # local d=2 00:05:12.941 17:39:32 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.941 17:39:32 -- scripts/common.sh@355 -- # echo 2 00:05:12.941 17:39:32 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.941 17:39:32 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.941 17:39:32 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.941 17:39:32 -- scripts/common.sh@368 -- # return 0 00:05:12.941 17:39:32 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.941 17:39:32 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:12.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.941 --rc genhtml_branch_coverage=1 00:05:12.941 --rc genhtml_function_coverage=1 00:05:12.941 --rc genhtml_legend=1 00:05:12.941 --rc geninfo_all_blocks=1 00:05:12.941 --rc geninfo_unexecuted_blocks=1 00:05:12.941 00:05:12.941 ' 00:05:12.941 17:39:32 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:12.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.941 --rc genhtml_branch_coverage=1 00:05:12.941 --rc genhtml_function_coverage=1 00:05:12.941 --rc genhtml_legend=1 00:05:12.941 --rc geninfo_all_blocks=1 00:05:12.941 --rc geninfo_unexecuted_blocks=1 00:05:12.941 00:05:12.941 ' 00:05:12.941 17:39:32 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:12.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.941 --rc genhtml_branch_coverage=1 00:05:12.941 --rc genhtml_function_coverage=1 00:05:12.941 --rc genhtml_legend=1 00:05:12.941 --rc geninfo_all_blocks=1 00:05:12.941 --rc geninfo_unexecuted_blocks=1 00:05:12.941 00:05:12.941 ' 00:05:12.941 17:39:32 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:12.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.941 --rc genhtml_branch_coverage=1 00:05:12.941 --rc genhtml_function_coverage=1 00:05:12.941 --rc genhtml_legend=1 00:05:12.941 --rc geninfo_all_blocks=1 00:05:12.941 --rc geninfo_unexecuted_blocks=1 00:05:12.941 00:05:12.941 ' 00:05:12.941 17:39:32 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:12.941 17:39:32 -- nvmf/common.sh@7 -- # uname -s 00:05:12.941 17:39:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:12.941 17:39:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:12.941 17:39:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:12.941 17:39:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:12.941 17:39:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:12.941 17:39:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:12.941 17:39:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:12.941 17:39:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:12.941 17:39:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:12.941 17:39:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:12.941 17:39:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e9f1c60b-b6ca-439f-b9ad-653632c758e1 00:05:12.941 
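The long run of TEST_HEADER include/spdk/*.h declarations and CXX test/cpp_headers/*.o compile lines above is a header self-containment pass: each public SPDK header gets wrapped in its own C++ translation unit, so a header that fails to pull in its own dependencies (or is not C++-clean) breaks the build immediately rather than in some downstream consumer. A minimal sketch of the idea, with hypothetical paths and compiler flags rather than SPDK's real build rules:

  # Compile every public header as a standalone C++ unit (illustrative only).
  for hdr in include/spdk/*.h; do
    unit=/tmp/$(basename "$hdr" .h).cpp
    printf '#include <spdk/%s>\n' "$(basename "$hdr")" > "$unit"
    c++ -Iinclude -c "$unit" -o "${unit%.cpp}.o" || echo "not self-contained: $hdr"
  done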
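The scripts/common.sh xtrace above ('lt 1.15 2' through 'return 0') spells out how the harness decides whether the installed lcov predates 2.0: both version strings are split on '.', '-' and ':' into arrays, the arrays are walked field by field with missing fields treated as 0, and the first unequal field decides the order. A standalone sketch of that comparison (not the exact SPDK function):

  lt() {  # lt A B -> exit 0 when version A < version B
    local IFS=.-: i
    local -a a b
    read -ra a <<< "$1"; read -ra b <<< "$2"
    for (( i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++ )); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1  # equal is not less-than
  }

  lt 1.15 2 && echo 'old lcov: pass the branch/function coverage rc options'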
17:39:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=e9f1c60b-b6ca-439f-b9ad-653632c758e1 00:05:12.941 17:39:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:12.941 17:39:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:12.941 17:39:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:12.941 17:39:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:12.941 17:39:32 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:12.941 17:39:32 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:12.941 17:39:32 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:12.941 17:39:32 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:12.941 17:39:32 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:12.941 17:39:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.941 17:39:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.941 17:39:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.941 17:39:32 -- paths/export.sh@5 -- # export PATH 00:05:12.941 17:39:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.941 17:39:32 -- nvmf/common.sh@51 -- # : 0 00:05:12.941 17:39:32 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:12.941 17:39:32 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:12.941 17:39:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:12.941 17:39:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:12.941 17:39:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:12.941 17:39:32 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:12.941 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:12.941 17:39:32 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:12.941 17:39:32 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:12.941 17:39:32 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:12.941 17:39:32 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:12.941 17:39:32 -- spdk/autotest.sh@32 -- # uname -s 00:05:12.941 17:39:32 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:12.941 17:39:32 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:12.941 17:39:32 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:12.941 17:39:32 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:12.941 17:39:32 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:12.941 17:39:32 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:13.201 17:39:32 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:13.201 17:39:32 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:13.201 17:39:32 -- spdk/autotest.sh@48 -- # udevadm_pid=67673 00:05:13.201 17:39:32 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:13.201 17:39:32 -- pm/common@17 -- # local monitor 00:05:13.201 17:39:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:13.201 17:39:32 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:13.201 17:39:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:13.201 17:39:32 -- pm/common@25 -- # sleep 1 00:05:13.201 17:39:32 -- pm/common@21 -- # date +%s 00:05:13.201 17:39:32 -- pm/common@21 -- # date +%s 00:05:13.201 17:39:32 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730828372 00:05:13.201 17:39:32 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730828372 00:05:13.201 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730828372_collect-vmstat.pm.log 00:05:13.201 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730828372_collect-cpu-load.pm.log 00:05:14.144 17:39:33 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:14.144 17:39:33 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:14.144 17:39:33 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:14.144 17:39:33 -- common/autotest_common.sh@10 -- # set +x 00:05:14.144 17:39:33 -- spdk/autotest.sh@59 -- # create_test_list 00:05:14.144 17:39:33 -- common/autotest_common.sh@750 -- # xtrace_disable 00:05:14.144 17:39:33 -- common/autotest_common.sh@10 -- # set +x 00:05:14.144 17:39:34 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:14.144 17:39:34 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:14.144 17:39:34 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:14.144 17:39:34 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:14.144 17:39:34 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:14.144 17:39:34 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:14.144 17:39:34 -- common/autotest_common.sh@1455 -- # uname 00:05:14.144 17:39:34 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:14.144 17:39:34 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:14.144 17:39:34 -- common/autotest_common.sh@1475 -- # uname 00:05:14.144 17:39:34 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:14.144 17:39:34 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:14.144 17:39:34 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:14.144 lcov: LCOV version 1.15 00:05:14.144 17:39:34 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:29.054 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:29.054 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:47.215 17:40:04 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:47.215 17:40:04 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:47.215 17:40:04 -- common/autotest_common.sh@10 -- # set +x 00:05:47.215 17:40:04 -- spdk/autotest.sh@78 -- # rm -f 00:05:47.215 17:40:04 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:47.215 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:47.215 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:47.215 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:47.215 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:47.215 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:47.215 17:40:05 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:47.215 17:40:05 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:47.215 17:40:05 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:47.215 17:40:05 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:47.215 17:40:05 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:47.215 17:40:05 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:47.215 17:40:05 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:47.215 17:40:05 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:47.215 17:40:05 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:47.215 17:40:05 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:47.215 17:40:05 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:47.215 17:40:05 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:47.215 17:40:05 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:47.215 17:40:05 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:47.215 17:40:05 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:47.215 17:40:05 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:05:47.215 17:40:05 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:05:47.215 17:40:05 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:47.215 17:40:05 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:47.215 17:40:05 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:47.215 17:40:05 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:05:47.215 17:40:05 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:05:47.215 17:40:05 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:47.215 17:40:05 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:47.215 17:40:05 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:47.215 17:40:05 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:05:47.215 17:40:05 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:05:47.215 17:40:05 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:47.215 17:40:05 
-- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:47.215 17:40:05 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:47.215 17:40:05 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:05:47.215 17:40:05 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:05:47.215 17:40:05 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:47.215 17:40:05 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:47.215 17:40:05 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:47.215 17:40:05 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:05:47.215 17:40:05 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:05:47.215 17:40:05 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:47.215 17:40:05 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:47.215 17:40:05 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:47.215 17:40:05 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:47.215 17:40:05 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:47.215 17:40:05 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:47.215 17:40:05 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:47.215 17:40:05 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:47.215 No valid GPT data, bailing 00:05:47.215 17:40:05 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:47.215 17:40:05 -- scripts/common.sh@394 -- # pt= 00:05:47.215 17:40:05 -- scripts/common.sh@395 -- # return 1 00:05:47.215 17:40:05 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:47.215 1+0 records in 00:05:47.215 1+0 records out 00:05:47.215 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0348949 s, 30.0 MB/s 00:05:47.215 17:40:05 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:47.215 17:40:05 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:47.215 17:40:05 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:47.215 17:40:05 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:47.215 17:40:05 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:47.215 No valid GPT data, bailing 00:05:47.215 17:40:05 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:47.215 17:40:05 -- scripts/common.sh@394 -- # pt= 00:05:47.215 17:40:05 -- scripts/common.sh@395 -- # return 1 00:05:47.215 17:40:05 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:47.215 1+0 records in 00:05:47.215 1+0 records out 00:05:47.215 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00588146 s, 178 MB/s 00:05:47.215 17:40:05 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:47.215 17:40:05 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:47.215 17:40:05 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:47.215 17:40:05 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:47.215 17:40:05 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:47.215 No valid GPT data, bailing 00:05:47.215 17:40:05 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:47.215 17:40:05 -- scripts/common.sh@394 -- # pt= 00:05:47.215 17:40:05 -- scripts/common.sh@395 -- # return 1 00:05:47.215 17:40:05 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:47.215 1+0 
records in 00:05:47.215 1+0 records out 00:05:47.215 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00663541 s, 158 MB/s 00:05:47.215 17:40:05 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:47.215 17:40:05 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:47.215 17:40:05 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:47.215 17:40:05 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:47.215 17:40:05 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:47.215 No valid GPT data, bailing 00:05:47.216 17:40:05 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:47.216 17:40:05 -- scripts/common.sh@394 -- # pt= 00:05:47.216 17:40:05 -- scripts/common.sh@395 -- # return 1 00:05:47.216 17:40:05 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:47.216 1+0 records in 00:05:47.216 1+0 records out 00:05:47.216 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00639615 s, 164 MB/s 00:05:47.216 17:40:05 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:47.216 17:40:05 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:47.216 17:40:05 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:47.216 17:40:05 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:47.216 17:40:05 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:47.216 No valid GPT data, bailing 00:05:47.216 17:40:05 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:47.216 17:40:05 -- scripts/common.sh@394 -- # pt= 00:05:47.216 17:40:05 -- scripts/common.sh@395 -- # return 1 00:05:47.216 17:40:05 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:47.216 1+0 records in 00:05:47.216 1+0 records out 00:05:47.216 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00669694 s, 157 MB/s 00:05:47.216 17:40:05 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:47.216 17:40:05 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:47.216 17:40:05 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:47.216 17:40:05 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:47.216 17:40:05 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:47.216 No valid GPT data, bailing 00:05:47.216 17:40:05 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:47.216 17:40:05 -- scripts/common.sh@394 -- # pt= 00:05:47.216 17:40:05 -- scripts/common.sh@395 -- # return 1 00:05:47.216 17:40:05 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:47.216 1+0 records in 00:05:47.216 1+0 records out 00:05:47.216 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00551192 s, 190 MB/s 00:05:47.216 17:40:05 -- spdk/autotest.sh@105 -- # sync 00:05:47.216 17:40:05 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:47.216 17:40:05 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:47.216 17:40:05 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:47.789 17:40:07 -- spdk/autotest.sh@111 -- # uname -s 00:05:47.789 17:40:07 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:47.789 17:40:07 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:47.789 17:40:07 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:48.380 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:48.645 
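The pre-cleanup traced above combines three checks per NVMe namespace: skip any device whose /sys/block/*/queue/zoned reports something other than 'none', probe for an existing partition table (the log consults scripts/spdk-gpt.py and then blkid -s PTTYPE), and zero the first MiB of every device that comes back empty so stale GPT or filesystem metadata cannot leak into later tests. The dd summary lines double as a write-speed sanity check: 1,048,576 bytes in 0.0349 s is the reported 30.0 MB/s, in decimal megabytes. A condensed, deliberately destructive sketch of the same loop, using only the blkid probe (run as root; globs as in the trace):

  shopt -s extglob  # required for the !(*p*) pattern used above
  for dev in /dev/nvme*n!(*p*); do
    # Mirror the zoned-namespace filter from get_zoned_devs; a missing or
    # non-'none' queue/zoned entry means the device is left alone.
    [[ $(cat /sys/block/$(basename "$dev")/queue/zoned 2>/dev/null) != none ]] && continue
    # An empty PTTYPE means no partition table, so the device is fair game.
    if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
      dd if=/dev/zero of="$dev" bs=1M count=1
    fi
  done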
Hugepages 00:05:48.645 node hugesize free / total 00:05:48.645 node0 1048576kB 0 / 0 00:05:48.906 node0 2048kB 0 / 0 00:05:48.906 00:05:48.906 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:48.906 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:48.906 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:48.906 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:49.167 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:49.167 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:49.167 17:40:09 -- spdk/autotest.sh@117 -- # uname -s 00:05:49.167 17:40:09 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:49.167 17:40:09 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:49.167 17:40:09 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:49.740 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:50.315 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:50.315 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:50.315 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:50.315 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:50.315 17:40:10 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:51.258 17:40:11 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:51.258 17:40:11 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:51.258 17:40:11 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:51.258 17:40:11 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:51.258 17:40:11 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:51.258 17:40:11 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:51.258 17:40:11 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:51.258 17:40:11 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:51.258 17:40:11 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:51.518 17:40:11 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:51.518 17:40:11 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:51.518 17:40:11 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:51.778 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:51.778 Waiting for block devices as requested 00:05:52.039 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:52.039 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:52.039 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:52.301 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:57.592 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:57.592 17:40:17 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:57.592 17:40:17 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:57.592 17:40:17 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:57.592 17:40:17 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:57.592 17:40:17 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:57.592 17:40:17 -- common/autotest_common.sh@1486 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:57.592 17:40:17 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:57.592 17:40:17 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:57.592 17:40:17 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:57.592 17:40:17 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:57.592 17:40:17 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:57.592 17:40:17 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:57.592 17:40:17 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:57.592 17:40:17 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:57.592 17:40:17 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:57.592 17:40:17 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:57.592 17:40:17 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:57.592 17:40:17 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:57.592 17:40:17 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:57.592 17:40:17 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:57.592 17:40:17 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:57.592 17:40:17 -- common/autotest_common.sh@1541 -- # continue 00:05:57.592 17:40:17 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:57.592 17:40:17 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:57.592 17:40:17 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:57.592 17:40:17 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:57.592 17:40:17 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:57.592 17:40:17 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:57.592 17:40:17 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:57.592 17:40:17 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:57.592 17:40:17 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:57.592 17:40:17 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:57.592 17:40:17 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:57.592 17:40:17 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:57.592 17:40:17 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:57.592 17:40:17 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:57.592 17:40:17 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:57.592 17:40:17 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:57.592 17:40:17 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:57.592 17:40:17 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:57.592 17:40:17 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:57.592 17:40:17 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:57.592 17:40:17 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:57.592 17:40:17 -- common/autotest_common.sh@1541 -- # continue 00:05:57.592 17:40:17 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:57.592 17:40:17 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:57.592 17:40:17 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:57.592 17:40:17 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:05:57.592 17:40:17 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:57.592 17:40:17 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:57.592 17:40:17 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:57.592 17:40:17 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:05:57.592 17:40:17 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:05:57.592 17:40:17 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:05:57.592 17:40:17 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:05:57.592 17:40:17 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:57.592 17:40:17 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:57.592 17:40:17 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:57.592 17:40:17 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:57.592 17:40:17 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:57.592 17:40:17 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:05:57.592 17:40:17 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:57.592 17:40:17 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:57.592 17:40:17 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:57.592 17:40:17 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:57.592 17:40:17 -- common/autotest_common.sh@1541 -- # continue 00:05:57.592 17:40:17 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:57.593 17:40:17 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:57.593 17:40:17 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:57.593 17:40:17 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:05:57.593 17:40:17 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:57.593 17:40:17 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:57.593 17:40:17 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:57.593 17:40:17 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:05:57.593 17:40:17 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:05:57.593 17:40:17 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:05:57.593 17:40:17 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:05:57.593 17:40:17 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:57.593 17:40:17 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:57.593 17:40:17 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:57.593 17:40:17 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:57.593 17:40:17 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:57.593 17:40:17 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:57.593 17:40:17 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:57.593 17:40:17 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:05:57.593 17:40:17 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:57.593 17:40:17 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 
00:05:57.593 17:40:17 -- common/autotest_common.sh@1541 -- # continue 00:05:57.593 17:40:17 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:57.593 17:40:17 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:57.593 17:40:17 -- common/autotest_common.sh@10 -- # set +x 00:05:57.593 17:40:17 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:57.593 17:40:17 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:57.593 17:40:17 -- common/autotest_common.sh@10 -- # set +x 00:05:57.593 17:40:17 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:57.854 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:58.426 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:58.426 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:58.426 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:58.426 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:58.426 17:40:18 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:58.426 17:40:18 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:58.426 17:40:18 -- common/autotest_common.sh@10 -- # set +x 00:05:58.688 17:40:18 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:58.688 17:40:18 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:58.688 17:40:18 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:58.688 17:40:18 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:58.688 17:40:18 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:58.688 17:40:18 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:58.688 17:40:18 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:58.688 17:40:18 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:58.688 17:40:18 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:58.688 17:40:18 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:58.688 17:40:18 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:58.688 17:40:18 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:58.688 17:40:18 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:58.688 17:40:18 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:58.688 17:40:18 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:58.688 17:40:18 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:58.688 17:40:18 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:58.688 17:40:18 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:58.688 17:40:18 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:58.688 17:40:18 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:58.688 17:40:18 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:58.688 17:40:18 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:58.689 17:40:18 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:58.689 17:40:18 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:58.689 17:40:18 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:58.689 17:40:18 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:58.689 17:40:18 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
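The opal_revert_cleanup step only acts on controllers whose PCI device ID is 0x0a54; condensed from the trace, the scan is roughly the following sketch — the helper name and paths follow the log, the body is illustrative:

    get_nvme_bdfs_by_id() {
        local id=$1 bdf matches=()
        # enumerate NVMe BDFs the same way the trace does, via gen_nvme.sh + jq
        for bdf in $(scripts/gen_nvme.sh | jq -r '.config[].params.traddr'); do
            [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$id" ]] && matches+=("$bdf")
        done
        (( ${#matches[@]} )) && printf '%s\n' "${matches[@]}"
    }

    get_nvme_bdfs_by_id 0x0a54   # empty on this rig: the emulated controllers report 0x0010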
00:05:58.689 17:40:18 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:58.689 17:40:18 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:58.689 17:40:18 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:58.689 17:40:18 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:58.689 17:40:18 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:58.689 17:40:18 -- common/autotest_common.sh@1570 -- # return 0 00:05:58.689 17:40:18 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:58.689 17:40:18 -- common/autotest_common.sh@1578 -- # return 0 00:05:58.689 17:40:18 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:58.689 17:40:18 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:58.689 17:40:18 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:58.689 17:40:18 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:58.689 17:40:18 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:58.689 17:40:18 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:58.689 17:40:18 -- common/autotest_common.sh@10 -- # set +x 00:05:58.689 17:40:18 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:58.689 17:40:18 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:58.689 17:40:18 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:58.689 17:40:18 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:58.689 17:40:18 -- common/autotest_common.sh@10 -- # set +x 00:05:58.689 ************************************ 00:05:58.689 START TEST env 00:05:58.689 ************************************ 00:05:58.689 17:40:18 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:58.689 * Looking for test storage... 00:05:58.689 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:58.689 17:40:18 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:58.689 17:40:18 env -- common/autotest_common.sh@1691 -- # lcov --version 00:05:58.689 17:40:18 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:58.689 17:40:18 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:58.689 17:40:18 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.689 17:40:18 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.689 17:40:18 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.689 17:40:18 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.689 17:40:18 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.689 17:40:18 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.689 17:40:18 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.689 17:40:18 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.689 17:40:18 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.689 17:40:18 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.689 17:40:18 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.689 17:40:18 env -- scripts/common.sh@344 -- # case "$op" in 00:05:58.689 17:40:18 env -- scripts/common.sh@345 -- # : 1 00:05:58.689 17:40:18 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.689 17:40:18 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:58.689 17:40:18 env -- scripts/common.sh@365 -- # decimal 1 00:05:58.689 17:40:18 env -- scripts/common.sh@353 -- # local d=1 00:05:58.689 17:40:18 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.689 17:40:18 env -- scripts/common.sh@355 -- # echo 1 00:05:58.689 17:40:18 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.689 17:40:18 env -- scripts/common.sh@366 -- # decimal 2 00:05:58.689 17:40:18 env -- scripts/common.sh@353 -- # local d=2 00:05:58.689 17:40:18 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.689 17:40:18 env -- scripts/common.sh@355 -- # echo 2 00:05:58.689 17:40:18 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.689 17:40:18 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.689 17:40:18 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.689 17:40:18 env -- scripts/common.sh@368 -- # return 0 00:05:58.689 17:40:18 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.689 17:40:18 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:58.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.689 --rc genhtml_branch_coverage=1 00:05:58.689 --rc genhtml_function_coverage=1 00:05:58.689 --rc genhtml_legend=1 00:05:58.689 --rc geninfo_all_blocks=1 00:05:58.689 --rc geninfo_unexecuted_blocks=1 00:05:58.689 00:05:58.689 ' 00:05:58.689 17:40:18 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:58.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.689 --rc genhtml_branch_coverage=1 00:05:58.689 --rc genhtml_function_coverage=1 00:05:58.689 --rc genhtml_legend=1 00:05:58.689 --rc geninfo_all_blocks=1 00:05:58.689 --rc geninfo_unexecuted_blocks=1 00:05:58.689 00:05:58.689 ' 00:05:58.689 17:40:18 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:58.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.689 --rc genhtml_branch_coverage=1 00:05:58.689 --rc genhtml_function_coverage=1 00:05:58.689 --rc genhtml_legend=1 00:05:58.689 --rc geninfo_all_blocks=1 00:05:58.689 --rc geninfo_unexecuted_blocks=1 00:05:58.689 00:05:58.689 ' 00:05:58.689 17:40:18 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:58.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.689 --rc genhtml_branch_coverage=1 00:05:58.689 --rc genhtml_function_coverage=1 00:05:58.689 --rc genhtml_legend=1 00:05:58.689 --rc geninfo_all_blocks=1 00:05:58.689 --rc geninfo_unexecuted_blocks=1 00:05:58.689 00:05:58.689 ' 00:05:58.689 17:40:18 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:58.689 17:40:18 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:58.689 17:40:18 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:58.689 17:40:18 env -- common/autotest_common.sh@10 -- # set +x 00:05:58.950 ************************************ 00:05:58.950 START TEST env_memory 00:05:58.950 ************************************ 00:05:58.950 17:40:18 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:58.950 00:05:58.950 00:05:58.950 CUnit - A unit testing framework for C - Version 2.1-3 00:05:58.950 http://cunit.sourceforge.net/ 00:05:58.950 00:05:58.950 00:05:58.950 Suite: memory 00:05:58.950 Test: alloc and free memory map ...[2024-11-05 17:40:18.740028] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:58.950 passed 00:05:58.950 Test: mem map translation ...[2024-11-05 17:40:18.778592] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:58.950 [2024-11-05 17:40:18.778633] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:58.950 [2024-11-05 17:40:18.778691] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:58.950 [2024-11-05 17:40:18.778707] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:58.950 passed 00:05:58.950 Test: mem map registration ...[2024-11-05 17:40:18.846544] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:58.950 [2024-11-05 17:40:18.846581] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:58.950 passed 00:05:58.950 Test: mem map adjacent registrations ...passed 00:05:58.950 00:05:58.950 Run Summary: Type Total Ran Passed Failed Inactive 00:05:58.950 suites 1 1 n/a 0 0 00:05:58.950 tests 4 4 4 0 0 00:05:58.950 asserts 152 152 152 0 n/a 00:05:58.950 00:05:58.950 Elapsed time = 0.232 seconds 00:05:59.212 00:05:59.212 real 0m0.265s 00:05:59.212 user 0m0.242s 00:05:59.212 sys 0m0.017s 00:05:59.212 17:40:18 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:59.212 17:40:18 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:59.212 ************************************ 00:05:59.212 END TEST env_memory 00:05:59.212 ************************************ 00:05:59.212 17:40:18 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:59.212 17:40:18 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:59.212 17:40:18 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:59.212 17:40:18 env -- common/autotest_common.sh@10 -- # set +x 00:05:59.212 ************************************ 00:05:59.212 START TEST env_vtophys 00:05:59.212 ************************************ 00:05:59.212 17:40:18 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:59.212 EAL: lib.eal log level changed from notice to debug 00:05:59.212 EAL: Detected lcore 0 as core 0 on socket 0 00:05:59.212 EAL: Detected lcore 1 as core 0 on socket 0 00:05:59.212 EAL: Detected lcore 2 as core 0 on socket 0 00:05:59.212 EAL: Detected lcore 3 as core 0 on socket 0 00:05:59.212 EAL: Detected lcore 4 as core 0 on socket 0 00:05:59.212 EAL: Detected lcore 5 as core 0 on socket 0 00:05:59.212 EAL: Detected lcore 6 as core 0 on socket 0 00:05:59.212 EAL: Detected lcore 7 as core 0 on socket 0 00:05:59.212 EAL: Detected lcore 8 as core 0 on socket 0 00:05:59.212 EAL: Detected lcore 9 as core 0 on socket 0 00:05:59.212 EAL: Maximum logical cores by configuration: 128 00:05:59.212 EAL: Detected CPU lcores: 10 00:05:59.212 EAL: Detected NUMA nodes: 1 00:05:59.212 EAL: Checking presence of .so 'librte_eal.so.25.0' 00:05:59.212 EAL: Detected shared linkage of DPDK 00:05:59.212 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25.0 00:05:59.212 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25.0 00:05:59.212 EAL: Registered [vdev] bus. 00:05:59.212 EAL: bus.vdev log level changed from disabled to notice 00:05:59.212 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25.0 00:05:59.212 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25.0 00:05:59.212 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:59.212 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:59.212 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:05:59.212 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:05:59.212 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:05:59.212 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:05:59.212 EAL: No shared files mode enabled, IPC will be disabled 00:05:59.212 EAL: No shared files mode enabled, IPC is disabled 00:05:59.212 EAL: Selected IOVA mode 'PA' 00:05:59.212 EAL: Probing VFIO support... 00:05:59.212 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:59.212 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:59.212 EAL: Ask a virtual area of 0x2e000 bytes 00:05:59.212 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:59.212 EAL: Setting up physically contiguous memory... 00:05:59.212 EAL: Setting maximum number of open files to 524288 00:05:59.212 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:59.212 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:59.212 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.212 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:59.212 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:59.212 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.212 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:59.212 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:59.212 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.212 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:59.212 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:59.212 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.212 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:59.212 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:59.212 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.212 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:59.212 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:59.212 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.212 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:59.212 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:59.212 EAL: Ask a virtual area of 0x61000 bytes 00:05:59.212 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:59.212 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:59.212 EAL: Ask a virtual area of 0x400000000 bytes 00:05:59.212 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:59.212 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:05:59.212 EAL: Hugepages will be freed exactly as allocated. 00:05:59.212 EAL: No shared files mode enabled, IPC is disabled 00:05:59.212 EAL: No shared files mode enabled, IPC is disabled 00:05:59.212 EAL: TSC frequency is ~2600000 KHz 00:05:59.212 EAL: Main lcore 0 is ready (tid=7f3fb562fa40;cpuset=[0]) 00:05:59.212 EAL: Trying to obtain current memory policy. 00:05:59.212 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.212 EAL: Restoring previous memory policy: 0 00:05:59.212 EAL: request: mp_malloc_sync 00:05:59.212 EAL: No shared files mode enabled, IPC is disabled 00:05:59.212 EAL: Heap on socket 0 was expanded by 2MB 00:05:59.212 EAL: No shared files mode enabled, IPC is disabled 00:05:59.212 EAL: Mem event callback 'spdk:(nil)' registered 00:05:59.212 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:59.212 00:05:59.212 00:05:59.212 CUnit - A unit testing framework for C - Version 2.1-3 00:05:59.212 http://cunit.sourceforge.net/ 00:05:59.212 00:05:59.212 00:05:59.212 Suite: components_suite 00:05:59.784 Test: vtophys_malloc_test ...passed 00:05:59.784 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:59.784 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.784 EAL: Restoring previous memory policy: 4 00:05:59.784 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.784 EAL: request: mp_malloc_sync 00:05:59.784 EAL: No shared files mode enabled, IPC is disabled 00:05:59.784 EAL: Heap on socket 0 was expanded by 4MB 00:05:59.784 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.784 EAL: request: mp_malloc_sync 00:05:59.784 EAL: No shared files mode enabled, IPC is disabled 00:05:59.784 EAL: Heap on socket 0 was shrunk by 4MB 00:05:59.784 EAL: Trying to obtain current memory policy. 00:05:59.784 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.784 EAL: Restoring previous memory policy: 4 00:05:59.784 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.784 EAL: request: mp_malloc_sync 00:05:59.784 EAL: No shared files mode enabled, IPC is disabled 00:05:59.784 EAL: Heap on socket 0 was expanded by 6MB 00:05:59.784 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.784 EAL: request: mp_malloc_sync 00:05:59.784 EAL: No shared files mode enabled, IPC is disabled 00:05:59.784 EAL: Heap on socket 0 was shrunk by 6MB 00:05:59.784 EAL: Trying to obtain current memory policy. 00:05:59.784 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.784 EAL: Restoring previous memory policy: 4 00:05:59.784 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.784 EAL: request: mp_malloc_sync 00:05:59.784 EAL: No shared files mode enabled, IPC is disabled 00:05:59.784 EAL: Heap on socket 0 was expanded by 10MB 00:05:59.784 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.784 EAL: request: mp_malloc_sync 00:05:59.784 EAL: No shared files mode enabled, IPC is disabled 00:05:59.784 EAL: Heap on socket 0 was shrunk by 10MB 00:05:59.784 EAL: Trying to obtain current memory policy. 
00:05:59.784 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.784 EAL: Restoring previous memory policy: 4 00:05:59.784 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.785 EAL: request: mp_malloc_sync 00:05:59.785 EAL: No shared files mode enabled, IPC is disabled 00:05:59.785 EAL: Heap on socket 0 was expanded by 18MB 00:05:59.785 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.785 EAL: request: mp_malloc_sync 00:05:59.785 EAL: No shared files mode enabled, IPC is disabled 00:05:59.785 EAL: Heap on socket 0 was shrunk by 18MB 00:05:59.785 EAL: Trying to obtain current memory policy. 00:05:59.785 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.785 EAL: Restoring previous memory policy: 4 00:05:59.785 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.785 EAL: request: mp_malloc_sync 00:05:59.785 EAL: No shared files mode enabled, IPC is disabled 00:05:59.785 EAL: Heap on socket 0 was expanded by 34MB 00:05:59.785 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.785 EAL: request: mp_malloc_sync 00:05:59.785 EAL: No shared files mode enabled, IPC is disabled 00:05:59.785 EAL: Heap on socket 0 was shrunk by 34MB 00:05:59.785 EAL: Trying to obtain current memory policy. 00:05:59.785 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.785 EAL: Restoring previous memory policy: 4 00:05:59.785 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.785 EAL: request: mp_malloc_sync 00:05:59.785 EAL: No shared files mode enabled, IPC is disabled 00:05:59.785 EAL: Heap on socket 0 was expanded by 66MB 00:05:59.785 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.785 EAL: request: mp_malloc_sync 00:05:59.785 EAL: No shared files mode enabled, IPC is disabled 00:05:59.785 EAL: Heap on socket 0 was shrunk by 66MB 00:05:59.785 EAL: Trying to obtain current memory policy. 00:05:59.785 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.785 EAL: Restoring previous memory policy: 4 00:05:59.785 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.785 EAL: request: mp_malloc_sync 00:05:59.785 EAL: No shared files mode enabled, IPC is disabled 00:05:59.785 EAL: Heap on socket 0 was expanded by 130MB 00:05:59.785 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.785 EAL: request: mp_malloc_sync 00:05:59.785 EAL: No shared files mode enabled, IPC is disabled 00:05:59.785 EAL: Heap on socket 0 was shrunk by 130MB 00:05:59.785 EAL: Trying to obtain current memory policy. 00:05:59.785 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.785 EAL: Restoring previous memory policy: 4 00:05:59.785 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.785 EAL: request: mp_malloc_sync 00:05:59.785 EAL: No shared files mode enabled, IPC is disabled 00:05:59.785 EAL: Heap on socket 0 was expanded by 258MB 00:05:59.785 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.785 EAL: request: mp_malloc_sync 00:05:59.785 EAL: No shared files mode enabled, IPC is disabled 00:05:59.785 EAL: Heap on socket 0 was shrunk by 258MB 00:05:59.785 EAL: Trying to obtain current memory policy. 
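Every expansion size this suite reports fits 2^n + 2 MB, consistent with the test doubling its buffer (2, 4, 8, ... MB) while the heap carries one extra 2 MB hugepage; that split is a reading of the log, not a claim about the allocator's internals:

    for n in $(seq 1 10); do printf '%dMB ' $(( (1 << n) + 2 )); done; echo
    # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB — the rounds above
    # plus the 514MB and 1026MB rounds that follow.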
00:05:59.785 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.047 EAL: Restoring previous memory policy: 4 00:06:00.047 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.047 EAL: request: mp_malloc_sync 00:06:00.047 EAL: No shared files mode enabled, IPC is disabled 00:06:00.047 EAL: Heap on socket 0 was expanded by 514MB 00:06:00.047 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.047 EAL: request: mp_malloc_sync 00:06:00.047 EAL: No shared files mode enabled, IPC is disabled 00:06:00.047 EAL: Heap on socket 0 was shrunk by 514MB 00:06:00.047 EAL: Trying to obtain current memory policy. 00:06:00.047 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:00.308 EAL: Restoring previous memory policy: 4 00:06:00.308 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.308 EAL: request: mp_malloc_sync 00:06:00.308 EAL: No shared files mode enabled, IPC is disabled 00:06:00.308 EAL: Heap on socket 0 was expanded by 1026MB 00:06:00.570 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.570 passed 00:06:00.570 00:06:00.570 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.570 suites 1 1 n/a 0 0 00:06:00.570 tests 2 2 2 0 0 00:06:00.570 asserts 5358 5358 5358 0 n/a 00:06:00.570 00:06:00.570 Elapsed time = 1.254 seconds 00:06:00.570 EAL: request: mp_malloc_sync 00:06:00.570 EAL: No shared files mode enabled, IPC is disabled 00:06:00.570 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:00.570 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.570 EAL: request: mp_malloc_sync 00:06:00.570 EAL: No shared files mode enabled, IPC is disabled 00:06:00.570 EAL: Heap on socket 0 was shrunk by 2MB 00:06:00.570 EAL: No shared files mode enabled, IPC is disabled 00:06:00.570 EAL: No shared files mode enabled, IPC is disabled 00:06:00.570 EAL: No shared files mode enabled, IPC is disabled 00:06:00.570 ************************************ 00:06:00.570 END TEST env_vtophys 00:06:00.570 ************************************ 00:06:00.570 00:06:00.570 real 0m1.499s 00:06:00.570 user 0m0.620s 00:06:00.570 sys 0m0.737s 00:06:00.570 17:40:20 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:00.570 17:40:20 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:00.570 17:40:20 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:00.570 17:40:20 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:00.570 17:40:20 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:00.570 17:40:20 env -- common/autotest_common.sh@10 -- # set +x 00:06:00.570 ************************************ 00:06:00.570 START TEST env_pci 00:06:00.570 ************************************ 00:06:00.570 17:40:20 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:00.570 00:06:00.570 00:06:00.570 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.570 http://cunit.sourceforge.net/ 00:06:00.570 00:06:00.570 00:06:00.570 Suite: pci 00:06:00.570 Test: pci_hook ...[2024-11-05 17:40:20.557759] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 70417 has claimed it 00:06:00.830 passed 00:06:00.830 00:06:00.830 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.830 suites 1 1 n/a 0 0 00:06:00.830 tests 1 1 1 0 0 00:06:00.830 asserts 25 25 25 0 n/a 00:06:00.830 00:06:00.830 Elapsed time = 0.006 seconds 00:06:00.830 EAL: Cannot find 
device (10000:00:01.0) 00:06:00.830 EAL: Failed to attach device on primary process 00:06:00.830 00:06:00.830 real 0m0.061s 00:06:00.830 user 0m0.027s 00:06:00.830 sys 0m0.033s 00:06:00.830 17:40:20 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:00.830 ************************************ 00:06:00.830 END TEST env_pci 00:06:00.830 17:40:20 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:00.830 ************************************ 00:06:00.830 17:40:20 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:00.830 17:40:20 env -- env/env.sh@15 -- # uname 00:06:00.830 17:40:20 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:00.830 17:40:20 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:00.830 17:40:20 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:00.830 17:40:20 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:06:00.830 17:40:20 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:00.830 17:40:20 env -- common/autotest_common.sh@10 -- # set +x 00:06:00.830 ************************************ 00:06:00.830 START TEST env_dpdk_post_init 00:06:00.830 ************************************ 00:06:00.830 17:40:20 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:00.830 EAL: Detected CPU lcores: 10 00:06:00.830 EAL: Detected NUMA nodes: 1 00:06:00.830 EAL: Detected shared linkage of DPDK 00:06:00.830 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:00.830 EAL: Selected IOVA mode 'PA' 00:06:01.092 Starting DPDK initialization... 00:06:01.092 Starting SPDK post initialization... 00:06:01.092 SPDK NVMe probe 00:06:01.092 Attaching to 0000:00:10.0 00:06:01.092 Attaching to 0000:00:11.0 00:06:01.092 Attaching to 0000:00:12.0 00:06:01.092 Attaching to 0000:00:13.0 00:06:01.092 Attached to 0000:00:13.0 00:06:01.092 Attached to 0000:00:10.0 00:06:01.092 Attached to 0000:00:11.0 00:06:01.092 Attached to 0000:00:12.0 00:06:01.092 Cleaning up... 
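Condensed from the env.sh trace above, the post-init binary is launched like this (a sketch; paths assume the repo layout seen in the log):

    argv='-c 0x1'                        # one-core mask, per env.sh@14
    if [[ $(uname) == Linux ]]; then
        # pin DPDK's base virtual address so mappings stay deterministic across processes
        argv+=' --base-virtaddr=0x200000000000'
    fi
    test/env/env_dpdk_post_init/env_dpdk_post_init $argv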
00:06:01.092 00:06:01.092 real 0m0.239s 00:06:01.092 user 0m0.066s 00:06:01.092 sys 0m0.074s 00:06:01.092 17:40:20 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:01.092 17:40:20 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:01.092 ************************************ 00:06:01.092 END TEST env_dpdk_post_init 00:06:01.092 ************************************ 00:06:01.092 17:40:20 env -- env/env.sh@26 -- # uname 00:06:01.092 17:40:20 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:01.092 17:40:20 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:01.092 17:40:20 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:01.092 17:40:20 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:01.092 17:40:20 env -- common/autotest_common.sh@10 -- # set +x 00:06:01.092 ************************************ 00:06:01.092 START TEST env_mem_callbacks 00:06:01.092 ************************************ 00:06:01.092 17:40:20 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:01.092 EAL: Detected CPU lcores: 10 00:06:01.092 EAL: Detected NUMA nodes: 1 00:06:01.092 EAL: Detected shared linkage of DPDK 00:06:01.092 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:01.092 EAL: Selected IOVA mode 'PA' 00:06:01.092 00:06:01.092 00:06:01.092 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.092 http://cunit.sourceforge.net/ 00:06:01.092 00:06:01.092 00:06:01.092 Suite: memory 00:06:01.092 Test: test ... 00:06:01.092 register 0x200000200000 2097152 00:06:01.092 malloc 3145728 00:06:01.092 register 0x200000400000 4194304 00:06:01.092 buf 0x200000500000 len 3145728 PASSED 00:06:01.092 malloc 64 00:06:01.092 buf 0x2000004fff40 len 64 PASSED 00:06:01.092 malloc 4194304 00:06:01.092 register 0x200000800000 6291456 00:06:01.092 buf 0x200000a00000 len 4194304 PASSED 00:06:01.092 free 0x200000500000 3145728 00:06:01.092 free 0x2000004fff40 64 00:06:01.092 unregister 0x200000400000 4194304 PASSED 00:06:01.092 free 0x200000a00000 4194304 00:06:01.092 unregister 0x200000800000 6291456 PASSED 00:06:01.092 malloc 8388608 00:06:01.354 register 0x200000400000 10485760 00:06:01.354 buf 0x200000600000 len 8388608 PASSED 00:06:01.354 free 0x200000600000 8388608 00:06:01.354 unregister 0x200000400000 10485760 PASSED 00:06:01.354 passed 00:06:01.354 00:06:01.354 Run Summary: Type Total Ran Passed Failed Inactive 00:06:01.354 suites 1 1 n/a 0 0 00:06:01.354 tests 1 1 1 0 0 00:06:01.354 asserts 15 15 15 0 n/a 00:06:01.354 00:06:01.354 Elapsed time = 0.010 seconds 00:06:01.354 00:06:01.354 real 0m0.168s 00:06:01.354 user 0m0.026s 00:06:01.354 sys 0m0.040s 00:06:01.354 17:40:21 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:01.354 17:40:21 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:01.354 ************************************ 00:06:01.354 END TEST env_mem_callbacks 00:06:01.354 ************************************ 00:06:01.354 00:06:01.354 real 0m2.597s 00:06:01.354 user 0m1.131s 00:06:01.354 sys 0m1.109s 00:06:01.354 17:40:21 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:01.354 ************************************ 00:06:01.354 END TEST env 00:06:01.354 ************************************ 00:06:01.354 17:40:21 env -- common/autotest_common.sh@10 -- # set +x 00:06:01.354 17:40:21 -- 
spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:01.354 17:40:21 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:01.354 17:40:21 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:01.354 17:40:21 -- common/autotest_common.sh@10 -- # set +x 00:06:01.354 ************************************ 00:06:01.354 START TEST rpc 00:06:01.354 ************************************ 00:06:01.354 17:40:21 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:01.354 * Looking for test storage... 00:06:01.354 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:01.354 17:40:21 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:01.354 17:40:21 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:01.354 17:40:21 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:01.354 17:40:21 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:01.354 17:40:21 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.354 17:40:21 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.354 17:40:21 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.354 17:40:21 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.354 17:40:21 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.354 17:40:21 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.354 17:40:21 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.354 17:40:21 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.354 17:40:21 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.354 17:40:21 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.354 17:40:21 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.354 17:40:21 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:01.354 17:40:21 rpc -- scripts/common.sh@345 -- # : 1 00:06:01.354 17:40:21 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.354 17:40:21 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:01.354 17:40:21 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:01.354 17:40:21 rpc -- scripts/common.sh@353 -- # local d=1 00:06:01.354 17:40:21 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.354 17:40:21 rpc -- scripts/common.sh@355 -- # echo 1 00:06:01.354 17:40:21 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.354 17:40:21 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:01.354 17:40:21 rpc -- scripts/common.sh@353 -- # local d=2 00:06:01.354 17:40:21 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.354 17:40:21 rpc -- scripts/common.sh@355 -- # echo 2 00:06:01.354 17:40:21 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.354 17:40:21 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.354 17:40:21 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.354 17:40:21 rpc -- scripts/common.sh@368 -- # return 0 00:06:01.354 17:40:21 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.354 17:40:21 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:01.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.354 --rc genhtml_branch_coverage=1 00:06:01.354 --rc genhtml_function_coverage=1 00:06:01.354 --rc genhtml_legend=1 00:06:01.354 --rc geninfo_all_blocks=1 00:06:01.354 --rc geninfo_unexecuted_blocks=1 00:06:01.354 00:06:01.354 ' 00:06:01.354 17:40:21 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:01.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.354 --rc genhtml_branch_coverage=1 00:06:01.354 --rc genhtml_function_coverage=1 00:06:01.354 --rc genhtml_legend=1 00:06:01.354 --rc geninfo_all_blocks=1 00:06:01.354 --rc geninfo_unexecuted_blocks=1 00:06:01.354 00:06:01.354 ' 00:06:01.354 17:40:21 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:01.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.354 --rc genhtml_branch_coverage=1 00:06:01.354 --rc genhtml_function_coverage=1 00:06:01.354 --rc genhtml_legend=1 00:06:01.354 --rc geninfo_all_blocks=1 00:06:01.354 --rc geninfo_unexecuted_blocks=1 00:06:01.354 00:06:01.354 ' 00:06:01.354 17:40:21 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:01.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.354 --rc genhtml_branch_coverage=1 00:06:01.354 --rc genhtml_function_coverage=1 00:06:01.354 --rc genhtml_legend=1 00:06:01.354 --rc geninfo_all_blocks=1 00:06:01.354 --rc geninfo_unexecuted_blocks=1 00:06:01.354 00:06:01.354 ' 00:06:01.354 17:40:21 rpc -- rpc/rpc.sh@65 -- # spdk_pid=70539 00:06:01.354 17:40:21 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:01.354 17:40:21 rpc -- rpc/rpc.sh@67 -- # waitforlisten 70539 00:06:01.354 17:40:21 rpc -- common/autotest_common.sh@833 -- # '[' -z 70539 ']' 00:06:01.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.354 17:40:21 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.354 17:40:21 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:01.354 17:40:21 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:01.354 17:40:21 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:01.354 17:40:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.354 17:40:21 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:01.616 [2024-11-05 17:40:21.404431] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:06:01.616 [2024-11-05 17:40:21.404552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70539 ] 00:06:01.616 [2024-11-05 17:40:21.534777] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:01.616 [2024-11-05 17:40:21.564982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.616 [2024-11-05 17:40:21.589158] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:01.616 [2024-11-05 17:40:21.589200] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 70539' to capture a snapshot of events at runtime. 00:06:01.616 [2024-11-05 17:40:21.589210] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:01.616 [2024-11-05 17:40:21.589224] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:01.616 [2024-11-05 17:40:21.589235] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid70539 for offline analysis/debug. 00:06:01.616 [2024-11-05 17:40:21.589579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.558 17:40:22 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:02.558 17:40:22 rpc -- common/autotest_common.sh@866 -- # return 0 00:06:02.558 17:40:22 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:02.558 17:40:22 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:02.558 17:40:22 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:02.558 17:40:22 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:02.558 17:40:22 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:02.558 17:40:22 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:02.558 17:40:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.558 ************************************ 00:06:02.558 START TEST rpc_integrity 00:06:02.558 ************************************ 00:06:02.558 17:40:22 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:06:02.558 17:40:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:02.558 17:40:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.558 17:40:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.558 17:40:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.558 17:40:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:02.558 17:40:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # 
jq length 00:06:02.558 17:40:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:02.558 17:40:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:02.558 17:40:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.558 17:40:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.558 17:40:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.558 17:40:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:02.558 17:40:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:02.558 17:40:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.558 17:40:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.558 17:40:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.558 17:40:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:02.558 { 00:06:02.558 "name": "Malloc0", 00:06:02.558 "aliases": [ 00:06:02.558 "d0f17ed1-538b-4340-a069-27deea727d43" 00:06:02.558 ], 00:06:02.558 "product_name": "Malloc disk", 00:06:02.558 "block_size": 512, 00:06:02.558 "num_blocks": 16384, 00:06:02.558 "uuid": "d0f17ed1-538b-4340-a069-27deea727d43", 00:06:02.558 "assigned_rate_limits": { 00:06:02.558 "rw_ios_per_sec": 0, 00:06:02.558 "rw_mbytes_per_sec": 0, 00:06:02.558 "r_mbytes_per_sec": 0, 00:06:02.558 "w_mbytes_per_sec": 0 00:06:02.558 }, 00:06:02.558 "claimed": false, 00:06:02.558 "zoned": false, 00:06:02.558 "supported_io_types": { 00:06:02.558 "read": true, 00:06:02.558 "write": true, 00:06:02.558 "unmap": true, 00:06:02.558 "flush": true, 00:06:02.558 "reset": true, 00:06:02.558 "nvme_admin": false, 00:06:02.558 "nvme_io": false, 00:06:02.558 "nvme_io_md": false, 00:06:02.558 "write_zeroes": true, 00:06:02.558 "zcopy": true, 00:06:02.558 "get_zone_info": false, 00:06:02.558 "zone_management": false, 00:06:02.558 "zone_append": false, 00:06:02.558 "compare": false, 00:06:02.558 "compare_and_write": false, 00:06:02.558 "abort": true, 00:06:02.558 "seek_hole": false, 00:06:02.558 "seek_data": false, 00:06:02.558 "copy": true, 00:06:02.558 "nvme_iov_md": false 00:06:02.558 }, 00:06:02.558 "memory_domains": [ 00:06:02.558 { 00:06:02.558 "dma_device_id": "system", 00:06:02.558 "dma_device_type": 1 00:06:02.558 }, 00:06:02.558 { 00:06:02.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:02.558 "dma_device_type": 2 00:06:02.558 } 00:06:02.558 ], 00:06:02.558 "driver_specific": {} 00:06:02.558 } 00:06:02.558 ]' 00:06:02.558 17:40:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:02.558 17:40:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:02.558 17:40:22 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:02.558 17:40:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.558 17:40:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.558 [2024-11-05 17:40:22.350574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:02.558 [2024-11-05 17:40:22.350632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:02.558 [2024-11-05 17:40:22.350654] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:02.559 [2024-11-05 17:40:22.350666] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:02.559 [2024-11-05 17:40:22.353034] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:02.559 [2024-11-05 17:40:22.353084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:02.559 Passthru0 00:06:02.559 17:40:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.559 17:40:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:02.559 17:40:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.559 17:40:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.559 17:40:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.559 17:40:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:02.559 { 00:06:02.559 "name": "Malloc0", 00:06:02.559 "aliases": [ 00:06:02.559 "d0f17ed1-538b-4340-a069-27deea727d43" 00:06:02.559 ], 00:06:02.559 "product_name": "Malloc disk", 00:06:02.559 "block_size": 512, 00:06:02.559 "num_blocks": 16384, 00:06:02.559 "uuid": "d0f17ed1-538b-4340-a069-27deea727d43", 00:06:02.559 "assigned_rate_limits": { 00:06:02.559 "rw_ios_per_sec": 0, 00:06:02.559 "rw_mbytes_per_sec": 0, 00:06:02.559 "r_mbytes_per_sec": 0, 00:06:02.559 "w_mbytes_per_sec": 0 00:06:02.559 }, 00:06:02.559 "claimed": true, 00:06:02.559 "claim_type": "exclusive_write", 00:06:02.559 "zoned": false, 00:06:02.559 "supported_io_types": { 00:06:02.559 "read": true, 00:06:02.559 "write": true, 00:06:02.559 "unmap": true, 00:06:02.559 "flush": true, 00:06:02.559 "reset": true, 00:06:02.559 "nvme_admin": false, 00:06:02.559 "nvme_io": false, 00:06:02.559 "nvme_io_md": false, 00:06:02.559 "write_zeroes": true, 00:06:02.559 "zcopy": true, 00:06:02.559 "get_zone_info": false, 00:06:02.559 "zone_management": false, 00:06:02.559 "zone_append": false, 00:06:02.559 "compare": false, 00:06:02.559 "compare_and_write": false, 00:06:02.559 "abort": true, 00:06:02.559 "seek_hole": false, 00:06:02.559 "seek_data": false, 00:06:02.559 "copy": true, 00:06:02.559 "nvme_iov_md": false 00:06:02.559 }, 00:06:02.559 "memory_domains": [ 00:06:02.559 { 00:06:02.559 "dma_device_id": "system", 00:06:02.559 "dma_device_type": 1 00:06:02.559 }, 00:06:02.559 { 00:06:02.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:02.559 "dma_device_type": 2 00:06:02.559 } 00:06:02.559 ], 00:06:02.559 "driver_specific": {} 00:06:02.559 }, 00:06:02.559 { 00:06:02.559 "name": "Passthru0", 00:06:02.559 "aliases": [ 00:06:02.559 "e79ca882-a504-58f4-a113-553293296b29" 00:06:02.559 ], 00:06:02.559 "product_name": "passthru", 00:06:02.559 "block_size": 512, 00:06:02.559 "num_blocks": 16384, 00:06:02.559 "uuid": "e79ca882-a504-58f4-a113-553293296b29", 00:06:02.559 "assigned_rate_limits": { 00:06:02.559 "rw_ios_per_sec": 0, 00:06:02.559 "rw_mbytes_per_sec": 0, 00:06:02.559 "r_mbytes_per_sec": 0, 00:06:02.559 "w_mbytes_per_sec": 0 00:06:02.559 }, 00:06:02.559 "claimed": false, 00:06:02.559 "zoned": false, 00:06:02.559 "supported_io_types": { 00:06:02.559 "read": true, 00:06:02.559 "write": true, 00:06:02.559 "unmap": true, 00:06:02.559 "flush": true, 00:06:02.559 "reset": true, 00:06:02.559 "nvme_admin": false, 00:06:02.559 "nvme_io": false, 00:06:02.559 "nvme_io_md": false, 00:06:02.559 "write_zeroes": true, 00:06:02.559 "zcopy": true, 00:06:02.559 "get_zone_info": false, 00:06:02.559 "zone_management": false, 00:06:02.559 "zone_append": false, 00:06:02.559 "compare": false, 00:06:02.559 "compare_and_write": false, 00:06:02.559 "abort": true, 00:06:02.559 "seek_hole": false, 00:06:02.559 "seek_data": false, 00:06:02.559 
"copy": true, 00:06:02.559 "nvme_iov_md": false 00:06:02.559 }, 00:06:02.559 "memory_domains": [ 00:06:02.559 { 00:06:02.559 "dma_device_id": "system", 00:06:02.559 "dma_device_type": 1 00:06:02.559 }, 00:06:02.559 { 00:06:02.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:02.559 "dma_device_type": 2 00:06:02.559 } 00:06:02.559 ], 00:06:02.559 "driver_specific": { 00:06:02.559 "passthru": { 00:06:02.559 "name": "Passthru0", 00:06:02.559 "base_bdev_name": "Malloc0" 00:06:02.559 } 00:06:02.559 } 00:06:02.559 } 00:06:02.559 ]' 00:06:02.559 17:40:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:02.559 17:40:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:02.559 17:40:22 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:02.559 17:40:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.559 17:40:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.559 17:40:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.559 17:40:22 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:02.559 17:40:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.559 17:40:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.559 17:40:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.559 17:40:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:02.559 17:40:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.559 17:40:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.559 17:40:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.559 17:40:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:02.559 17:40:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:02.559 17:40:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:02.559 00:06:02.559 real 0m0.234s 00:06:02.559 user 0m0.131s 00:06:02.559 sys 0m0.036s 00:06:02.559 ************************************ 00:06:02.559 END TEST rpc_integrity 00:06:02.559 ************************************ 00:06:02.559 17:40:22 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:02.559 17:40:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.559 17:40:22 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:02.559 17:40:22 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:02.559 17:40:22 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:02.559 17:40:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.559 ************************************ 00:06:02.559 START TEST rpc_plugins 00:06:02.559 ************************************ 00:06:02.559 17:40:22 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:06:02.559 17:40:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:02.559 17:40:22 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.559 17:40:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:02.559 17:40:22 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.559 17:40:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:02.559 17:40:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:02.559 17:40:22 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 
00:06:02.559 17:40:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:02.559 17:40:22 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.559 17:40:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:02.559 { 00:06:02.559 "name": "Malloc1", 00:06:02.559 "aliases": [ 00:06:02.559 "d5077c12-bd4b-4f38-8554-56fc71f07ae5" 00:06:02.559 ], 00:06:02.559 "product_name": "Malloc disk", 00:06:02.559 "block_size": 4096, 00:06:02.559 "num_blocks": 256, 00:06:02.559 "uuid": "d5077c12-bd4b-4f38-8554-56fc71f07ae5", 00:06:02.559 "assigned_rate_limits": { 00:06:02.559 "rw_ios_per_sec": 0, 00:06:02.559 "rw_mbytes_per_sec": 0, 00:06:02.559 "r_mbytes_per_sec": 0, 00:06:02.559 "w_mbytes_per_sec": 0 00:06:02.559 }, 00:06:02.559 "claimed": false, 00:06:02.559 "zoned": false, 00:06:02.559 "supported_io_types": { 00:06:02.559 "read": true, 00:06:02.559 "write": true, 00:06:02.559 "unmap": true, 00:06:02.559 "flush": true, 00:06:02.559 "reset": true, 00:06:02.559 "nvme_admin": false, 00:06:02.559 "nvme_io": false, 00:06:02.559 "nvme_io_md": false, 00:06:02.559 "write_zeroes": true, 00:06:02.559 "zcopy": true, 00:06:02.559 "get_zone_info": false, 00:06:02.559 "zone_management": false, 00:06:02.559 "zone_append": false, 00:06:02.559 "compare": false, 00:06:02.559 "compare_and_write": false, 00:06:02.559 "abort": true, 00:06:02.559 "seek_hole": false, 00:06:02.559 "seek_data": false, 00:06:02.559 "copy": true, 00:06:02.559 "nvme_iov_md": false 00:06:02.559 }, 00:06:02.559 "memory_domains": [ 00:06:02.559 { 00:06:02.559 "dma_device_id": "system", 00:06:02.559 "dma_device_type": 1 00:06:02.559 }, 00:06:02.559 { 00:06:02.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:02.559 "dma_device_type": 2 00:06:02.559 } 00:06:02.559 ], 00:06:02.559 "driver_specific": {} 00:06:02.559 } 00:06:02.559 ]' 00:06:02.559 17:40:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:02.821 17:40:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:02.821 17:40:22 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:02.821 17:40:22 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.821 17:40:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:02.821 17:40:22 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.821 17:40:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:02.821 17:40:22 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.821 17:40:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:02.821 17:40:22 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.821 17:40:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:02.821 17:40:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:02.821 17:40:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:02.821 00:06:02.821 real 0m0.120s 00:06:02.821 user 0m0.062s 00:06:02.821 sys 0m0.017s 00:06:02.821 ************************************ 00:06:02.821 END TEST rpc_plugins 00:06:02.821 ************************************ 00:06:02.821 17:40:22 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:02.821 17:40:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:02.821 17:40:22 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:02.821 17:40:22 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:02.821 17:40:22 rpc -- common/autotest_common.sh@1109 
-- # xtrace_disable 00:06:02.821 17:40:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.821 ************************************ 00:06:02.821 START TEST rpc_trace_cmd_test 00:06:02.821 ************************************ 00:06:02.821 17:40:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:06:02.821 17:40:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:02.821 17:40:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:02.821 17:40:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.821 17:40:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:02.821 17:40:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.821 17:40:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:02.821 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid70539", 00:06:02.821 "tpoint_group_mask": "0x8", 00:06:02.821 "iscsi_conn": { 00:06:02.821 "mask": "0x2", 00:06:02.821 "tpoint_mask": "0x0" 00:06:02.821 }, 00:06:02.821 "scsi": { 00:06:02.821 "mask": "0x4", 00:06:02.821 "tpoint_mask": "0x0" 00:06:02.821 }, 00:06:02.821 "bdev": { 00:06:02.821 "mask": "0x8", 00:06:02.821 "tpoint_mask": "0xffffffffffffffff" 00:06:02.821 }, 00:06:02.821 "nvmf_rdma": { 00:06:02.821 "mask": "0x10", 00:06:02.821 "tpoint_mask": "0x0" 00:06:02.821 }, 00:06:02.821 "nvmf_tcp": { 00:06:02.821 "mask": "0x20", 00:06:02.821 "tpoint_mask": "0x0" 00:06:02.821 }, 00:06:02.821 "ftl": { 00:06:02.821 "mask": "0x40", 00:06:02.821 "tpoint_mask": "0x0" 00:06:02.821 }, 00:06:02.821 "blobfs": { 00:06:02.821 "mask": "0x80", 00:06:02.821 "tpoint_mask": "0x0" 00:06:02.821 }, 00:06:02.821 "dsa": { 00:06:02.821 "mask": "0x200", 00:06:02.821 "tpoint_mask": "0x0" 00:06:02.821 }, 00:06:02.821 "thread": { 00:06:02.821 "mask": "0x400", 00:06:02.821 "tpoint_mask": "0x0" 00:06:02.821 }, 00:06:02.821 "nvme_pcie": { 00:06:02.821 "mask": "0x800", 00:06:02.821 "tpoint_mask": "0x0" 00:06:02.821 }, 00:06:02.821 "iaa": { 00:06:02.821 "mask": "0x1000", 00:06:02.821 "tpoint_mask": "0x0" 00:06:02.821 }, 00:06:02.821 "nvme_tcp": { 00:06:02.821 "mask": "0x2000", 00:06:02.821 "tpoint_mask": "0x0" 00:06:02.821 }, 00:06:02.821 "bdev_nvme": { 00:06:02.821 "mask": "0x4000", 00:06:02.821 "tpoint_mask": "0x0" 00:06:02.821 }, 00:06:02.821 "sock": { 00:06:02.821 "mask": "0x8000", 00:06:02.821 "tpoint_mask": "0x0" 00:06:02.821 }, 00:06:02.821 "blob": { 00:06:02.821 "mask": "0x10000", 00:06:02.821 "tpoint_mask": "0x0" 00:06:02.821 }, 00:06:02.821 "bdev_raid": { 00:06:02.821 "mask": "0x20000", 00:06:02.821 "tpoint_mask": "0x0" 00:06:02.821 }, 00:06:02.821 "scheduler": { 00:06:02.821 "mask": "0x40000", 00:06:02.821 "tpoint_mask": "0x0" 00:06:02.821 } 00:06:02.821 }' 00:06:02.821 17:40:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:02.821 17:40:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:02.821 17:40:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:02.821 17:40:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:02.821 17:40:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:02.821 17:40:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:02.821 17:40:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:03.082 17:40:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:03.082 17:40:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- 
# jq -r .bdev.tpoint_mask 00:06:03.082 17:40:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:03.082 00:06:03.082 real 0m0.172s 00:06:03.082 user 0m0.132s 00:06:03.082 sys 0m0.031s 00:06:03.082 ************************************ 00:06:03.082 END TEST rpc_trace_cmd_test 00:06:03.082 ************************************ 00:06:03.082 17:40:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:03.082 17:40:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:03.082 17:40:22 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:03.082 17:40:22 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:03.082 17:40:22 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:03.082 17:40:22 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:03.082 17:40:22 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:03.082 17:40:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.082 ************************************ 00:06:03.082 START TEST rpc_daemon_integrity 00:06:03.082 ************************************ 00:06:03.082 17:40:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:06:03.082 17:40:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:03.082 17:40:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.082 17:40:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.082 17:40:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.082 17:40:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:03.082 17:40:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:03.082 17:40:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:03.082 17:40:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:03.082 17:40:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.082 17:40:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.082 17:40:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.082 17:40:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:03.082 17:40:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:03.082 17:40:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.082 17:40:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.082 17:40:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.082 17:40:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:03.082 { 00:06:03.082 "name": "Malloc2", 00:06:03.082 "aliases": [ 00:06:03.082 "723a96e4-700f-4235-b25b-2bbfeecbad79" 00:06:03.082 ], 00:06:03.082 "product_name": "Malloc disk", 00:06:03.082 "block_size": 512, 00:06:03.082 "num_blocks": 16384, 00:06:03.082 "uuid": "723a96e4-700f-4235-b25b-2bbfeecbad79", 00:06:03.082 "assigned_rate_limits": { 00:06:03.082 "rw_ios_per_sec": 0, 00:06:03.082 "rw_mbytes_per_sec": 0, 00:06:03.082 "r_mbytes_per_sec": 0, 00:06:03.082 "w_mbytes_per_sec": 0 00:06:03.082 }, 00:06:03.082 "claimed": false, 00:06:03.082 "zoned": false, 00:06:03.082 "supported_io_types": { 00:06:03.082 "read": true, 00:06:03.082 "write": true, 00:06:03.082 "unmap": true, 00:06:03.082 "flush": true, 00:06:03.082 "reset": true, 
00:06:03.082 "nvme_admin": false, 00:06:03.083 "nvme_io": false, 00:06:03.083 "nvme_io_md": false, 00:06:03.083 "write_zeroes": true, 00:06:03.083 "zcopy": true, 00:06:03.083 "get_zone_info": false, 00:06:03.083 "zone_management": false, 00:06:03.083 "zone_append": false, 00:06:03.083 "compare": false, 00:06:03.083 "compare_and_write": false, 00:06:03.083 "abort": true, 00:06:03.083 "seek_hole": false, 00:06:03.083 "seek_data": false, 00:06:03.083 "copy": true, 00:06:03.083 "nvme_iov_md": false 00:06:03.083 }, 00:06:03.083 "memory_domains": [ 00:06:03.083 { 00:06:03.083 "dma_device_id": "system", 00:06:03.083 "dma_device_type": 1 00:06:03.083 }, 00:06:03.083 { 00:06:03.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:03.083 "dma_device_type": 2 00:06:03.083 } 00:06:03.083 ], 00:06:03.083 "driver_specific": {} 00:06:03.083 } 00:06:03.083 ]' 00:06:03.083 17:40:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:03.083 17:40:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:03.083 17:40:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:03.083 17:40:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.083 17:40:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.083 [2024-11-05 17:40:23.023223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:03.083 [2024-11-05 17:40:23.023283] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:03.083 [2024-11-05 17:40:23.023305] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:06:03.083 [2024-11-05 17:40:23.023317] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:03.083 [2024-11-05 17:40:23.025682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:03.083 [2024-11-05 17:40:23.025716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:03.083 Passthru0 00:06:03.083 17:40:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.083 17:40:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:03.083 17:40:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.083 17:40:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.083 17:40:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.083 17:40:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:03.083 { 00:06:03.083 "name": "Malloc2", 00:06:03.083 "aliases": [ 00:06:03.083 "723a96e4-700f-4235-b25b-2bbfeecbad79" 00:06:03.083 ], 00:06:03.083 "product_name": "Malloc disk", 00:06:03.083 "block_size": 512, 00:06:03.083 "num_blocks": 16384, 00:06:03.083 "uuid": "723a96e4-700f-4235-b25b-2bbfeecbad79", 00:06:03.083 "assigned_rate_limits": { 00:06:03.083 "rw_ios_per_sec": 0, 00:06:03.083 "rw_mbytes_per_sec": 0, 00:06:03.083 "r_mbytes_per_sec": 0, 00:06:03.083 "w_mbytes_per_sec": 0 00:06:03.083 }, 00:06:03.083 "claimed": true, 00:06:03.083 "claim_type": "exclusive_write", 00:06:03.083 "zoned": false, 00:06:03.083 "supported_io_types": { 00:06:03.083 "read": true, 00:06:03.083 "write": true, 00:06:03.083 "unmap": true, 00:06:03.083 "flush": true, 00:06:03.083 "reset": true, 00:06:03.083 "nvme_admin": false, 00:06:03.083 "nvme_io": false, 00:06:03.083 "nvme_io_md": false, 00:06:03.083 
"write_zeroes": true, 00:06:03.083 "zcopy": true, 00:06:03.083 "get_zone_info": false, 00:06:03.083 "zone_management": false, 00:06:03.083 "zone_append": false, 00:06:03.083 "compare": false, 00:06:03.083 "compare_and_write": false, 00:06:03.083 "abort": true, 00:06:03.083 "seek_hole": false, 00:06:03.083 "seek_data": false, 00:06:03.083 "copy": true, 00:06:03.083 "nvme_iov_md": false 00:06:03.083 }, 00:06:03.083 "memory_domains": [ 00:06:03.083 { 00:06:03.083 "dma_device_id": "system", 00:06:03.083 "dma_device_type": 1 00:06:03.083 }, 00:06:03.083 { 00:06:03.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:03.083 "dma_device_type": 2 00:06:03.083 } 00:06:03.083 ], 00:06:03.083 "driver_specific": {} 00:06:03.083 }, 00:06:03.083 { 00:06:03.083 "name": "Passthru0", 00:06:03.083 "aliases": [ 00:06:03.083 "99115c4a-d761-5f41-bc83-5eecf158b6dd" 00:06:03.083 ], 00:06:03.083 "product_name": "passthru", 00:06:03.083 "block_size": 512, 00:06:03.083 "num_blocks": 16384, 00:06:03.083 "uuid": "99115c4a-d761-5f41-bc83-5eecf158b6dd", 00:06:03.083 "assigned_rate_limits": { 00:06:03.083 "rw_ios_per_sec": 0, 00:06:03.083 "rw_mbytes_per_sec": 0, 00:06:03.083 "r_mbytes_per_sec": 0, 00:06:03.083 "w_mbytes_per_sec": 0 00:06:03.083 }, 00:06:03.083 "claimed": false, 00:06:03.083 "zoned": false, 00:06:03.083 "supported_io_types": { 00:06:03.083 "read": true, 00:06:03.083 "write": true, 00:06:03.083 "unmap": true, 00:06:03.083 "flush": true, 00:06:03.083 "reset": true, 00:06:03.083 "nvme_admin": false, 00:06:03.083 "nvme_io": false, 00:06:03.083 "nvme_io_md": false, 00:06:03.083 "write_zeroes": true, 00:06:03.083 "zcopy": true, 00:06:03.083 "get_zone_info": false, 00:06:03.083 "zone_management": false, 00:06:03.083 "zone_append": false, 00:06:03.083 "compare": false, 00:06:03.083 "compare_and_write": false, 00:06:03.083 "abort": true, 00:06:03.083 "seek_hole": false, 00:06:03.083 "seek_data": false, 00:06:03.083 "copy": true, 00:06:03.083 "nvme_iov_md": false 00:06:03.083 }, 00:06:03.083 "memory_domains": [ 00:06:03.083 { 00:06:03.083 "dma_device_id": "system", 00:06:03.083 "dma_device_type": 1 00:06:03.083 }, 00:06:03.083 { 00:06:03.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:03.083 "dma_device_type": 2 00:06:03.083 } 00:06:03.083 ], 00:06:03.083 "driver_specific": { 00:06:03.083 "passthru": { 00:06:03.083 "name": "Passthru0", 00:06:03.083 "base_bdev_name": "Malloc2" 00:06:03.083 } 00:06:03.083 } 00:06:03.083 } 00:06:03.083 ]' 00:06:03.083 17:40:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:03.345 17:40:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:03.345 17:40:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:03.345 17:40:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.345 17:40:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.345 17:40:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.345 17:40:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:03.345 17:40:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.345 17:40:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.345 17:40:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.345 17:40:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:03.345 17:40:23 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.345 17:40:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.345 17:40:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.345 17:40:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:03.345 17:40:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:03.345 17:40:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:03.345 00:06:03.345 real 0m0.230s 00:06:03.345 user 0m0.121s 00:06:03.345 sys 0m0.044s 00:06:03.345 ************************************ 00:06:03.345 END TEST rpc_daemon_integrity 00:06:03.345 ************************************ 00:06:03.345 17:40:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:03.345 17:40:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:03.345 17:40:23 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:03.345 17:40:23 rpc -- rpc/rpc.sh@84 -- # killprocess 70539 00:06:03.345 17:40:23 rpc -- common/autotest_common.sh@952 -- # '[' -z 70539 ']' 00:06:03.345 17:40:23 rpc -- common/autotest_common.sh@956 -- # kill -0 70539 00:06:03.345 17:40:23 rpc -- common/autotest_common.sh@957 -- # uname 00:06:03.345 17:40:23 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:03.345 17:40:23 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70539 00:06:03.345 17:40:23 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:03.345 17:40:23 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:03.345 killing process with pid 70539 00:06:03.345 17:40:23 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70539' 00:06:03.345 17:40:23 rpc -- common/autotest_common.sh@971 -- # kill 70539 00:06:03.345 17:40:23 rpc -- common/autotest_common.sh@976 -- # wait 70539 00:06:03.606 00:06:03.606 real 0m2.342s 00:06:03.606 user 0m2.725s 00:06:03.606 sys 0m0.658s 00:06:03.606 ************************************ 00:06:03.606 END TEST rpc 00:06:03.606 ************************************ 00:06:03.606 17:40:23 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:03.606 17:40:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.606 17:40:23 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:03.606 17:40:23 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:03.606 17:40:23 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:03.606 17:40:23 -- common/autotest_common.sh@10 -- # set +x 00:06:03.606 ************************************ 00:06:03.606 START TEST skip_rpc 00:06:03.606 ************************************ 00:06:03.606 17:40:23 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:03.868 * Looking for test storage... 
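The rpc_daemon_integrity run that finishes above is the same create/inspect/delete cycle as rpc_integrity, driven through rpc_cmd: allocate a malloc bdev, claim it behind a passthru vbdev, confirm both via bdev_get_bdevs, then tear down in reverse order. A minimal standalone sketch of that flow, assuming scripts/rpc.py from an SPDK checkout talking to a target on the default /var/tmp/spdk.sock (the name Malloc2 is simply what this run got back):

    # create an 8 MiB malloc bdev with 512-byte blocks; the new bdev name is printed
    scripts/rpc.py bdev_malloc_create 8 512
    # claim it behind a passthru vbdev, flipping "claimed" to true in bdev_get_bdevs
    scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
    scripts/rpc.py bdev_get_bdevs | jq length      # expect 2: Malloc2 + Passthru0
    scripts/rpc.py bdev_passthru_delete Passthru0  # delete in reverse order
    scripts/rpc.py bdev_malloc_delete Malloc2
    scripts/rpc.py bdev_get_bdevs | jq length      # back to 0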
00:06:03.868 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:03.868 17:40:23 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:03.868 17:40:23 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:03.868 17:40:23 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:03.868 17:40:23 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:03.868 17:40:23 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.868 17:40:23 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.868 17:40:23 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.868 17:40:23 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.868 17:40:23 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.868 17:40:23 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.868 17:40:23 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.868 17:40:23 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.868 17:40:23 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.868 17:40:23 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.868 17:40:23 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.868 17:40:23 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:03.868 17:40:23 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:03.868 17:40:23 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.868 17:40:23 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:03.868 17:40:23 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:03.868 17:40:23 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:03.868 17:40:23 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.868 17:40:23 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:03.868 17:40:23 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.868 17:40:23 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:03.868 17:40:23 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:03.868 17:40:23 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.868 17:40:23 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:03.868 17:40:23 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.868 17:40:23 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.868 17:40:23 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.868 17:40:23 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:03.868 17:40:23 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.868 17:40:23 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:03.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.868 --rc genhtml_branch_coverage=1 00:06:03.868 --rc genhtml_function_coverage=1 00:06:03.868 --rc genhtml_legend=1 00:06:03.868 --rc geninfo_all_blocks=1 00:06:03.868 --rc geninfo_unexecuted_blocks=1 00:06:03.868 00:06:03.868 ' 00:06:03.868 17:40:23 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:03.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.868 --rc genhtml_branch_coverage=1 00:06:03.868 --rc genhtml_function_coverage=1 00:06:03.868 --rc genhtml_legend=1 00:06:03.868 --rc geninfo_all_blocks=1 00:06:03.868 --rc geninfo_unexecuted_blocks=1 00:06:03.868 00:06:03.868 ' 00:06:03.868 17:40:23 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:06:03.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.868 --rc genhtml_branch_coverage=1 00:06:03.868 --rc genhtml_function_coverage=1 00:06:03.868 --rc genhtml_legend=1 00:06:03.868 --rc geninfo_all_blocks=1 00:06:03.868 --rc geninfo_unexecuted_blocks=1 00:06:03.868 00:06:03.868 ' 00:06:03.868 17:40:23 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:03.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.868 --rc genhtml_branch_coverage=1 00:06:03.868 --rc genhtml_function_coverage=1 00:06:03.868 --rc genhtml_legend=1 00:06:03.868 --rc geninfo_all_blocks=1 00:06:03.868 --rc geninfo_unexecuted_blocks=1 00:06:03.868 00:06:03.868 ' 00:06:03.868 17:40:23 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:03.868 17:40:23 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:03.868 17:40:23 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:03.868 17:40:23 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:03.868 17:40:23 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:03.868 17:40:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.868 ************************************ 00:06:03.868 START TEST skip_rpc 00:06:03.868 ************************************ 00:06:03.868 17:40:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:06:03.868 17:40:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=70740 00:06:03.868 17:40:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.868 17:40:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:03.868 17:40:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:03.868 [2024-11-05 17:40:23.807590] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:06:03.868 [2024-11-05 17:40:23.807709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70740 ] 00:06:04.129 [2024-11-05 17:40:23.938655] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
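spdk_tgt has just come up with --no-rpc-server, so no /var/tmp/spdk.sock listener exists; the NOT rpc_cmd spdk_get_version checks on the next lines pass precisely because the call errors out. Reduced to its core (rpc_cmd is the test helper from autotest_common.sh; plain scripts/rpc.py fails the same way):

    # with --no-rpc-server the target never opens its Unix socket,
    # so any RPC attempt must fail -- and that failure is the expected result
    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    sleep 5
    if rpc_cmd spdk_get_version; then
        echo 'unexpected: RPC answered with the RPC server disabled' >&2
        exit 1
    fi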
00:06:04.129 [2024-11-05 17:40:23.969352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.129 [2024-11-05 17:40:23.994085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.468 17:40:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:09.468 17:40:28 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:09.468 17:40:28 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:09.468 17:40:28 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:09.468 17:40:28 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.468 17:40:28 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:09.468 17:40:28 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.468 17:40:28 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:09.468 17:40:28 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.468 17:40:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.468 17:40:28 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:09.468 17:40:28 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:09.468 17:40:28 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:09.468 17:40:28 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:09.468 17:40:28 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:09.468 17:40:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:09.468 17:40:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 70740 00:06:09.468 17:40:28 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 70740 ']' 00:06:09.468 17:40:28 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 70740 00:06:09.468 17:40:28 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:06:09.468 17:40:28 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:09.468 17:40:28 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70740 00:06:09.468 17:40:28 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:09.468 17:40:28 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:09.468 17:40:28 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70740' 00:06:09.468 killing process with pid 70740 00:06:09.468 17:40:28 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 70740 00:06:09.468 17:40:28 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 70740 00:06:09.468 00:06:09.468 real 0m5.327s 00:06:09.468 user 0m4.944s 00:06:09.468 sys 0m0.276s 00:06:09.468 17:40:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:09.468 17:40:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.468 ************************************ 00:06:09.468 END TEST skip_rpc 00:06:09.468 ************************************ 00:06:09.468 17:40:29 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:09.468 17:40:29 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:09.468 17:40:29 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:09.468 17:40:29 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.468 ************************************ 00:06:09.468 START TEST skip_rpc_with_json 00:06:09.468 ************************************ 00:06:09.468 17:40:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:06:09.468 17:40:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:09.468 17:40:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=70822 00:06:09.468 17:40:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:09.468 17:40:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 70822 00:06:09.468 17:40:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 70822 ']' 00:06:09.468 17:40:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.468 17:40:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:09.468 17:40:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.468 17:40:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.469 17:40:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:09.469 17:40:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:09.469 [2024-11-05 17:40:29.190383] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:06:09.469 [2024-11-05 17:40:29.190493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70822 ] 00:06:09.469 [2024-11-05 17:40:29.315419] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
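The skip_rpc_with_json case starting here is a configuration round trip: ask for a transport that does not exist yet, create it, snapshot the whole target with save_config, then restart from that JSON and grep the new target's log for the transport banner (see the config dump and the 'TCP Transport Init' check below). Condensed, with the paths the test uses and rpc.py standing in for rpc_cmd:

    scripts/rpc.py nvmf_get_transports --trtype tcp   # errors: transport 'tcp' does not exist
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py save_config > test/rpc/config.json # dump of every subsystem, shown below
    kill "$spdk_pid"
    # assumption: the restarted target's output is captured to LOG_PATH for the grep
    build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' test/rpc/log.txt     # transport rebuilt from JSON alone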
00:06:09.469 [2024-11-05 17:40:29.341232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.469 [2024-11-05 17:40:29.364384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.042 17:40:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:10.042 17:40:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:06:10.042 17:40:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:10.042 17:40:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.042 17:40:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:10.042 [2024-11-05 17:40:30.031602] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:10.303 request: 00:06:10.303 { 00:06:10.303 "trtype": "tcp", 00:06:10.303 "method": "nvmf_get_transports", 00:06:10.303 "req_id": 1 00:06:10.303 } 00:06:10.303 Got JSON-RPC error response 00:06:10.303 response: 00:06:10.303 { 00:06:10.303 "code": -19, 00:06:10.303 "message": "No such device" 00:06:10.303 } 00:06:10.303 17:40:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:10.303 17:40:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:10.303 17:40:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.303 17:40:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:10.303 [2024-11-05 17:40:30.043668] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:10.303 17:40:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.303 17:40:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:10.303 17:40:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.303 17:40:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:10.303 17:40:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.303 17:40:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:10.303 { 00:06:10.303 "subsystems": [ 00:06:10.303 { 00:06:10.303 "subsystem": "fsdev", 00:06:10.303 "config": [ 00:06:10.303 { 00:06:10.303 "method": "fsdev_set_opts", 00:06:10.303 "params": { 00:06:10.303 "fsdev_io_pool_size": 65535, 00:06:10.303 "fsdev_io_cache_size": 256 00:06:10.303 } 00:06:10.303 } 00:06:10.303 ] 00:06:10.303 }, 00:06:10.303 { 00:06:10.303 "subsystem": "keyring", 00:06:10.303 "config": [] 00:06:10.303 }, 00:06:10.303 { 00:06:10.303 "subsystem": "iobuf", 00:06:10.303 "config": [ 00:06:10.303 { 00:06:10.303 "method": "iobuf_set_options", 00:06:10.303 "params": { 00:06:10.303 "small_pool_count": 8192, 00:06:10.303 "large_pool_count": 1024, 00:06:10.303 "small_bufsize": 8192, 00:06:10.303 "large_bufsize": 135168, 00:06:10.303 "enable_numa": false 00:06:10.303 } 00:06:10.303 } 00:06:10.303 ] 00:06:10.303 }, 00:06:10.303 { 00:06:10.303 "subsystem": "sock", 00:06:10.303 "config": [ 00:06:10.303 { 00:06:10.303 "method": "sock_set_default_impl", 00:06:10.303 "params": { 00:06:10.303 "impl_name": "posix" 00:06:10.303 } 00:06:10.303 }, 00:06:10.303 { 00:06:10.303 "method": "sock_impl_set_options", 00:06:10.303 "params": { 00:06:10.303 "impl_name": "ssl", 00:06:10.303 "recv_buf_size": 4096, 00:06:10.303 
"send_buf_size": 4096, 00:06:10.303 "enable_recv_pipe": true, 00:06:10.303 "enable_quickack": false, 00:06:10.303 "enable_placement_id": 0, 00:06:10.303 "enable_zerocopy_send_server": true, 00:06:10.303 "enable_zerocopy_send_client": false, 00:06:10.303 "zerocopy_threshold": 0, 00:06:10.303 "tls_version": 0, 00:06:10.303 "enable_ktls": false 00:06:10.303 } 00:06:10.303 }, 00:06:10.303 { 00:06:10.303 "method": "sock_impl_set_options", 00:06:10.303 "params": { 00:06:10.303 "impl_name": "posix", 00:06:10.303 "recv_buf_size": 2097152, 00:06:10.303 "send_buf_size": 2097152, 00:06:10.303 "enable_recv_pipe": true, 00:06:10.303 "enable_quickack": false, 00:06:10.303 "enable_placement_id": 0, 00:06:10.303 "enable_zerocopy_send_server": true, 00:06:10.303 "enable_zerocopy_send_client": false, 00:06:10.303 "zerocopy_threshold": 0, 00:06:10.303 "tls_version": 0, 00:06:10.303 "enable_ktls": false 00:06:10.303 } 00:06:10.303 } 00:06:10.303 ] 00:06:10.303 }, 00:06:10.303 { 00:06:10.303 "subsystem": "vmd", 00:06:10.303 "config": [] 00:06:10.303 }, 00:06:10.303 { 00:06:10.303 "subsystem": "accel", 00:06:10.303 "config": [ 00:06:10.303 { 00:06:10.303 "method": "accel_set_options", 00:06:10.303 "params": { 00:06:10.303 "small_cache_size": 128, 00:06:10.303 "large_cache_size": 16, 00:06:10.303 "task_count": 2048, 00:06:10.303 "sequence_count": 2048, 00:06:10.303 "buf_count": 2048 00:06:10.303 } 00:06:10.303 } 00:06:10.303 ] 00:06:10.303 }, 00:06:10.303 { 00:06:10.303 "subsystem": "bdev", 00:06:10.303 "config": [ 00:06:10.303 { 00:06:10.303 "method": "bdev_set_options", 00:06:10.303 "params": { 00:06:10.303 "bdev_io_pool_size": 65535, 00:06:10.303 "bdev_io_cache_size": 256, 00:06:10.303 "bdev_auto_examine": true, 00:06:10.303 "iobuf_small_cache_size": 128, 00:06:10.303 "iobuf_large_cache_size": 16 00:06:10.303 } 00:06:10.303 }, 00:06:10.303 { 00:06:10.303 "method": "bdev_raid_set_options", 00:06:10.303 "params": { 00:06:10.303 "process_window_size_kb": 1024, 00:06:10.303 "process_max_bandwidth_mb_sec": 0 00:06:10.303 } 00:06:10.303 }, 00:06:10.303 { 00:06:10.303 "method": "bdev_iscsi_set_options", 00:06:10.303 "params": { 00:06:10.303 "timeout_sec": 30 00:06:10.303 } 00:06:10.303 }, 00:06:10.303 { 00:06:10.303 "method": "bdev_nvme_set_options", 00:06:10.303 "params": { 00:06:10.303 "action_on_timeout": "none", 00:06:10.303 "timeout_us": 0, 00:06:10.303 "timeout_admin_us": 0, 00:06:10.303 "keep_alive_timeout_ms": 10000, 00:06:10.303 "arbitration_burst": 0, 00:06:10.303 "low_priority_weight": 0, 00:06:10.303 "medium_priority_weight": 0, 00:06:10.303 "high_priority_weight": 0, 00:06:10.303 "nvme_adminq_poll_period_us": 10000, 00:06:10.303 "nvme_ioq_poll_period_us": 0, 00:06:10.303 "io_queue_requests": 0, 00:06:10.303 "delay_cmd_submit": true, 00:06:10.303 "transport_retry_count": 4, 00:06:10.303 "bdev_retry_count": 3, 00:06:10.303 "transport_ack_timeout": 0, 00:06:10.303 "ctrlr_loss_timeout_sec": 0, 00:06:10.303 "reconnect_delay_sec": 0, 00:06:10.303 "fast_io_fail_timeout_sec": 0, 00:06:10.303 "disable_auto_failback": false, 00:06:10.303 "generate_uuids": false, 00:06:10.303 "transport_tos": 0, 00:06:10.303 "nvme_error_stat": false, 00:06:10.303 "rdma_srq_size": 0, 00:06:10.303 "io_path_stat": false, 00:06:10.303 "allow_accel_sequence": false, 00:06:10.303 "rdma_max_cq_size": 0, 00:06:10.303 "rdma_cm_event_timeout_ms": 0, 00:06:10.303 "dhchap_digests": [ 00:06:10.303 "sha256", 00:06:10.303 "sha384", 00:06:10.304 "sha512" 00:06:10.304 ], 00:06:10.304 "dhchap_dhgroups": [ 00:06:10.304 "null", 00:06:10.304 
"ffdhe2048", 00:06:10.304 "ffdhe3072", 00:06:10.304 "ffdhe4096", 00:06:10.304 "ffdhe6144", 00:06:10.304 "ffdhe8192" 00:06:10.304 ] 00:06:10.304 } 00:06:10.304 }, 00:06:10.304 { 00:06:10.304 "method": "bdev_nvme_set_hotplug", 00:06:10.304 "params": { 00:06:10.304 "period_us": 100000, 00:06:10.304 "enable": false 00:06:10.304 } 00:06:10.304 }, 00:06:10.304 { 00:06:10.304 "method": "bdev_wait_for_examine" 00:06:10.304 } 00:06:10.304 ] 00:06:10.304 }, 00:06:10.304 { 00:06:10.304 "subsystem": "scsi", 00:06:10.304 "config": null 00:06:10.304 }, 00:06:10.304 { 00:06:10.304 "subsystem": "scheduler", 00:06:10.304 "config": [ 00:06:10.304 { 00:06:10.304 "method": "framework_set_scheduler", 00:06:10.304 "params": { 00:06:10.304 "name": "static" 00:06:10.304 } 00:06:10.304 } 00:06:10.304 ] 00:06:10.304 }, 00:06:10.304 { 00:06:10.304 "subsystem": "vhost_scsi", 00:06:10.304 "config": [] 00:06:10.304 }, 00:06:10.304 { 00:06:10.304 "subsystem": "vhost_blk", 00:06:10.304 "config": [] 00:06:10.304 }, 00:06:10.304 { 00:06:10.304 "subsystem": "ublk", 00:06:10.304 "config": [] 00:06:10.304 }, 00:06:10.304 { 00:06:10.304 "subsystem": "nbd", 00:06:10.304 "config": [] 00:06:10.304 }, 00:06:10.304 { 00:06:10.304 "subsystem": "nvmf", 00:06:10.304 "config": [ 00:06:10.304 { 00:06:10.304 "method": "nvmf_set_config", 00:06:10.304 "params": { 00:06:10.304 "discovery_filter": "match_any", 00:06:10.304 "admin_cmd_passthru": { 00:06:10.304 "identify_ctrlr": false 00:06:10.304 }, 00:06:10.304 "dhchap_digests": [ 00:06:10.304 "sha256", 00:06:10.304 "sha384", 00:06:10.304 "sha512" 00:06:10.304 ], 00:06:10.304 "dhchap_dhgroups": [ 00:06:10.304 "null", 00:06:10.304 "ffdhe2048", 00:06:10.304 "ffdhe3072", 00:06:10.304 "ffdhe4096", 00:06:10.304 "ffdhe6144", 00:06:10.304 "ffdhe8192" 00:06:10.304 ] 00:06:10.304 } 00:06:10.304 }, 00:06:10.304 { 00:06:10.304 "method": "nvmf_set_max_subsystems", 00:06:10.304 "params": { 00:06:10.304 "max_subsystems": 1024 00:06:10.304 } 00:06:10.304 }, 00:06:10.304 { 00:06:10.304 "method": "nvmf_set_crdt", 00:06:10.304 "params": { 00:06:10.304 "crdt1": 0, 00:06:10.304 "crdt2": 0, 00:06:10.304 "crdt3": 0 00:06:10.304 } 00:06:10.304 }, 00:06:10.304 { 00:06:10.304 "method": "nvmf_create_transport", 00:06:10.304 "params": { 00:06:10.304 "trtype": "TCP", 00:06:10.304 "max_queue_depth": 128, 00:06:10.304 "max_io_qpairs_per_ctrlr": 127, 00:06:10.304 "in_capsule_data_size": 4096, 00:06:10.304 "max_io_size": 131072, 00:06:10.304 "io_unit_size": 131072, 00:06:10.304 "max_aq_depth": 128, 00:06:10.304 "num_shared_buffers": 511, 00:06:10.304 "buf_cache_size": 4294967295, 00:06:10.304 "dif_insert_or_strip": false, 00:06:10.304 "zcopy": false, 00:06:10.304 "c2h_success": true, 00:06:10.304 "sock_priority": 0, 00:06:10.304 "abort_timeout_sec": 1, 00:06:10.304 "ack_timeout": 0, 00:06:10.304 "data_wr_pool_size": 0 00:06:10.304 } 00:06:10.304 } 00:06:10.304 ] 00:06:10.304 }, 00:06:10.304 { 00:06:10.304 "subsystem": "iscsi", 00:06:10.304 "config": [ 00:06:10.304 { 00:06:10.304 "method": "iscsi_set_options", 00:06:10.304 "params": { 00:06:10.304 "node_base": "iqn.2016-06.io.spdk", 00:06:10.304 "max_sessions": 128, 00:06:10.304 "max_connections_per_session": 2, 00:06:10.304 "max_queue_depth": 64, 00:06:10.304 "default_time2wait": 2, 00:06:10.304 "default_time2retain": 20, 00:06:10.304 "first_burst_length": 8192, 00:06:10.304 "immediate_data": true, 00:06:10.304 "allow_duplicated_isid": false, 00:06:10.304 "error_recovery_level": 0, 00:06:10.304 "nop_timeout": 60, 00:06:10.304 "nop_in_interval": 30, 00:06:10.304 
"disable_chap": false, 00:06:10.304 "require_chap": false, 00:06:10.304 "mutual_chap": false, 00:06:10.304 "chap_group": 0, 00:06:10.304 "max_large_datain_per_connection": 64, 00:06:10.304 "max_r2t_per_connection": 4, 00:06:10.304 "pdu_pool_size": 36864, 00:06:10.304 "immediate_data_pool_size": 16384, 00:06:10.304 "data_out_pool_size": 2048 00:06:10.304 } 00:06:10.304 } 00:06:10.304 ] 00:06:10.304 } 00:06:10.304 ] 00:06:10.304 } 00:06:10.304 17:40:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:10.304 17:40:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 70822 00:06:10.304 17:40:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 70822 ']' 00:06:10.304 17:40:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 70822 00:06:10.304 17:40:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:06:10.304 17:40:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:10.304 17:40:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70822 00:06:10.304 17:40:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:10.304 17:40:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:10.304 killing process with pid 70822 00:06:10.304 17:40:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70822' 00:06:10.304 17:40:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 70822 00:06:10.304 17:40:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 70822 00:06:10.565 17:40:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=70851 00:06:10.565 17:40:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:10.565 17:40:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:15.850 17:40:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 70851 00:06:15.850 17:40:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 70851 ']' 00:06:15.850 17:40:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 70851 00:06:15.850 17:40:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:06:15.850 17:40:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:15.850 17:40:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70851 00:06:15.850 killing process with pid 70851 00:06:15.850 17:40:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:15.850 17:40:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:15.850 17:40:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70851' 00:06:15.850 17:40:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 70851 00:06:15.850 17:40:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 70851 00:06:16.110 17:40:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:16.110 17:40:35 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:16.110 00:06:16.110 real 0m6.743s 00:06:16.110 user 0m6.325s 00:06:16.110 sys 0m0.648s 00:06:16.110 17:40:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:16.110 17:40:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:16.110 ************************************ 00:06:16.110 END TEST skip_rpc_with_json 00:06:16.110 ************************************ 00:06:16.110 17:40:35 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:16.110 17:40:35 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:16.110 17:40:35 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:16.110 17:40:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.110 ************************************ 00:06:16.110 START TEST skip_rpc_with_delay 00:06:16.110 ************************************ 00:06:16.110 17:40:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:06:16.110 17:40:35 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:16.110 17:40:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:16.110 17:40:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:16.110 17:40:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:16.110 17:40:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.110 17:40:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:16.110 17:40:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.110 17:40:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:16.110 17:40:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.110 17:40:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:16.110 17:40:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:16.110 17:40:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:16.110 [2024-11-05 17:40:35.983149] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
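The ERROR just printed is the entire skip_rpc_with_delay assertion: --wait-for-rpc tells the app to hold framework initialization until an RPC releases it, which cannot work when --no-rpc-server suppresses the RPC server, so spdk_app_start rejects the combination at startup. The NOT wrapper on the next lines turns that non-zero exit into a pass; in isolation:

    # must exit non-zero: there is no RPC server to deliver the go-ahead
    if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        exit 1   # reaching here would mean the invalid flag pair was accepted
    fi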
00:06:16.110 17:40:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:16.110 17:40:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:16.110 17:40:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:16.110 17:40:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:16.110 00:06:16.110 real 0m0.111s 00:06:16.110 user 0m0.056s 00:06:16.110 sys 0m0.053s 00:06:16.110 ************************************ 00:06:16.110 17:40:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:16.110 17:40:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:16.110 END TEST skip_rpc_with_delay 00:06:16.110 ************************************ 00:06:16.110 17:40:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:16.110 17:40:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:16.110 17:40:36 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:16.110 17:40:36 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:16.110 17:40:36 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:16.110 17:40:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.110 ************************************ 00:06:16.110 START TEST exit_on_failed_rpc_init 00:06:16.110 ************************************ 00:06:16.110 17:40:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:06:16.110 17:40:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=70962 00:06:16.110 17:40:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 70962 00:06:16.110 17:40:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 70962 ']' 00:06:16.110 17:40:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.110 17:40:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.110 17:40:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:16.110 17:40:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.110 17:40:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:16.110 17:40:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:16.370 [2024-11-05 17:40:36.155201] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:06:16.370 [2024-11-05 17:40:36.155305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70962 ] 00:06:16.370 [2024-11-05 17:40:36.280566] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
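With pid 70962 now holding the default RPC socket, exit_on_failed_rpc_init launches a second spdk_tgt on core mask 0x2 against the same path; the lines that follow show the expected 'socket in use' failure and the non-zero exit it must produce. In outline:

    build/bin/spdk_tgt -m 0x1 &        # first instance binds /var/tmp/spdk.sock
    waitforlisten "$spdk_pid"          # test helper: block until the socket is up
    if build/bin/spdk_tgt -m 0x2; then # same socket path -> rpc.c: 'in use. Specify another.'
        exit 1                         # a second listener must never come up
    fi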
00:06:16.370 [2024-11-05 17:40:36.305288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.370 [2024-11-05 17:40:36.328777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.313 17:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:17.313 17:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:06:17.313 17:40:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:17.313 17:40:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:17.313 17:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:17.313 17:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:17.313 17:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:17.313 17:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.313 17:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:17.313 17:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.313 17:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:17.313 17:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.313 17:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:17.313 17:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:17.313 17:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:17.313 [2024-11-05 17:40:37.081669] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:06:17.313 [2024-11-05 17:40:37.081799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70980 ] 00:06:17.313 [2024-11-05 17:40:37.210033] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:17.313 [2024-11-05 17:40:37.241376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.313 [2024-11-05 17:40:37.260180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.313 [2024-11-05 17:40:37.260262] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:17.313 [2024-11-05 17:40:37.260278] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:17.313 [2024-11-05 17:40:37.260289] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:17.574 17:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:17.574 17:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:17.574 17:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:17.574 17:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:17.574 17:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:17.574 17:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:17.574 17:40:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:17.574 17:40:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 70962 00:06:17.574 17:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 70962 ']' 00:06:17.574 17:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 70962 00:06:17.574 17:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:06:17.574 17:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:17.574 17:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70962 00:06:17.574 17:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:17.574 17:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:17.574 killing process with pid 70962 00:06:17.574 17:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70962' 00:06:17.574 17:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 70962 00:06:17.574 17:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 70962 00:06:17.836 00:06:17.836 real 0m1.568s 00:06:17.836 user 0m1.688s 00:06:17.836 sys 0m0.414s 00:06:17.836 17:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:17.836 ************************************ 00:06:17.836 END TEST exit_on_failed_rpc_init 00:06:17.836 ************************************ 00:06:17.836 17:40:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:17.836 17:40:37 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:17.836 00:06:17.836 real 0m14.126s 00:06:17.836 user 0m13.159s 00:06:17.836 sys 0m1.583s 00:06:17.836 17:40:37 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:17.836 17:40:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.836 ************************************ 00:06:17.836 END TEST skip_rpc 00:06:17.836 ************************************ 00:06:17.836 17:40:37 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:17.836 17:40:37 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:17.836 17:40:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:17.836 17:40:37 -- common/autotest_common.sh@10 -- # set +x 00:06:17.836 
************************************ 00:06:17.836 START TEST rpc_client 00:06:17.836 ************************************ 00:06:17.836 17:40:37 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:17.836 * Looking for test storage... 00:06:17.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:17.836 17:40:37 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:17.836 17:40:37 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:06:17.836 17:40:37 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:18.098 17:40:37 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:18.098 17:40:37 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.098 17:40:37 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.098 17:40:37 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.098 17:40:37 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.098 17:40:37 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.098 17:40:37 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.098 17:40:37 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.098 17:40:37 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.098 17:40:37 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.098 17:40:37 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.098 17:40:37 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.098 17:40:37 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:18.098 17:40:37 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:18.098 17:40:37 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.098 17:40:37 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.098 17:40:37 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:18.098 17:40:37 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:18.098 17:40:37 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.098 17:40:37 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:18.098 17:40:37 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.098 17:40:37 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:18.098 17:40:37 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:18.098 17:40:37 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.098 17:40:37 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:18.098 17:40:37 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.098 17:40:37 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.098 17:40:37 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.098 17:40:37 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:18.098 17:40:37 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.098 17:40:37 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:18.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.098 --rc genhtml_branch_coverage=1 00:06:18.098 --rc genhtml_function_coverage=1 00:06:18.098 --rc genhtml_legend=1 00:06:18.098 --rc geninfo_all_blocks=1 00:06:18.098 --rc geninfo_unexecuted_blocks=1 00:06:18.098 00:06:18.098 ' 00:06:18.098 17:40:37 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:18.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.098 --rc genhtml_branch_coverage=1 00:06:18.098 --rc genhtml_function_coverage=1 00:06:18.098 --rc genhtml_legend=1 00:06:18.098 --rc geninfo_all_blocks=1 00:06:18.098 --rc geninfo_unexecuted_blocks=1 00:06:18.098 00:06:18.098 ' 00:06:18.098 17:40:37 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:18.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.098 --rc genhtml_branch_coverage=1 00:06:18.098 --rc genhtml_function_coverage=1 00:06:18.098 --rc genhtml_legend=1 00:06:18.098 --rc geninfo_all_blocks=1 00:06:18.098 --rc geninfo_unexecuted_blocks=1 00:06:18.098 00:06:18.098 ' 00:06:18.098 17:40:37 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:18.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.098 --rc genhtml_branch_coverage=1 00:06:18.098 --rc genhtml_function_coverage=1 00:06:18.098 --rc genhtml_legend=1 00:06:18.098 --rc geninfo_all_blocks=1 00:06:18.098 --rc geninfo_unexecuted_blocks=1 00:06:18.098 00:06:18.098 ' 00:06:18.098 17:40:37 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:18.098 OK 00:06:18.098 17:40:37 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:18.098 ************************************ 00:06:18.098 END TEST rpc_client 00:06:18.098 ************************************ 00:06:18.098 00:06:18.098 real 0m0.194s 00:06:18.098 user 0m0.097s 00:06:18.098 sys 0m0.103s 00:06:18.098 17:40:37 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:18.098 17:40:37 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:18.098 17:40:37 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:18.098 17:40:37 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:18.098 17:40:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:18.098 17:40:37 -- common/autotest_common.sh@10 -- # set +x 00:06:18.098 ************************************ 00:06:18.098 START TEST json_config 00:06:18.098 ************************************ 00:06:18.098 17:40:37 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:18.099 17:40:38 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:18.099 17:40:38 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:06:18.099 17:40:38 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:18.361 17:40:38 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:18.361 17:40:38 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.361 17:40:38 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.361 17:40:38 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.361 17:40:38 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.361 17:40:38 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.361 17:40:38 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.361 17:40:38 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.361 17:40:38 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.361 17:40:38 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.361 17:40:38 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.361 17:40:38 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.361 17:40:38 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:18.361 17:40:38 json_config -- scripts/common.sh@345 -- # : 1 00:06:18.361 17:40:38 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.361 17:40:38 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.361 17:40:38 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:18.361 17:40:38 json_config -- scripts/common.sh@353 -- # local d=1 00:06:18.361 17:40:38 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.361 17:40:38 json_config -- scripts/common.sh@355 -- # echo 1 00:06:18.361 17:40:38 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.361 17:40:38 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:18.361 17:40:38 json_config -- scripts/common.sh@353 -- # local d=2 00:06:18.361 17:40:38 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.361 17:40:38 json_config -- scripts/common.sh@355 -- # echo 2 00:06:18.361 17:40:38 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.361 17:40:38 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.361 17:40:38 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.361 17:40:38 json_config -- scripts/common.sh@368 -- # return 0 00:06:18.361 17:40:38 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.362 17:40:38 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:18.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.362 --rc genhtml_branch_coverage=1 00:06:18.362 --rc genhtml_function_coverage=1 00:06:18.362 --rc genhtml_legend=1 00:06:18.362 --rc geninfo_all_blocks=1 00:06:18.362 --rc geninfo_unexecuted_blocks=1 00:06:18.362 00:06:18.362 ' 00:06:18.362 17:40:38 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:18.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.362 --rc genhtml_branch_coverage=1 00:06:18.362 --rc genhtml_function_coverage=1 00:06:18.362 --rc genhtml_legend=1 00:06:18.362 --rc geninfo_all_blocks=1 00:06:18.362 --rc geninfo_unexecuted_blocks=1 00:06:18.362 00:06:18.362 ' 00:06:18.362 17:40:38 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:18.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.362 --rc genhtml_branch_coverage=1 00:06:18.362 --rc genhtml_function_coverage=1 00:06:18.362 --rc genhtml_legend=1 00:06:18.362 --rc geninfo_all_blocks=1 00:06:18.362 --rc geninfo_unexecuted_blocks=1 00:06:18.362 00:06:18.362 ' 00:06:18.362 17:40:38 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:18.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.362 --rc genhtml_branch_coverage=1 00:06:18.362 --rc genhtml_function_coverage=1 00:06:18.362 --rc genhtml_legend=1 00:06:18.362 --rc geninfo_all_blocks=1 00:06:18.362 --rc geninfo_unexecuted_blocks=1 00:06:18.362 00:06:18.362 ' 00:06:18.362 17:40:38 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:18.362 17:40:38 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:18.362 17:40:38 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:18.362 17:40:38 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:18.362 17:40:38 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:18.362 17:40:38 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:18.362 17:40:38 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:18.362 17:40:38 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:18.362 17:40:38 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:18.362 17:40:38 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:18.362 17:40:38 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:18.362 17:40:38 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:18.362 17:40:38 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e9f1c60b-b6ca-439f-b9ad-653632c758e1 00:06:18.362 17:40:38 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=e9f1c60b-b6ca-439f-b9ad-653632c758e1 00:06:18.362 17:40:38 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:18.362 17:40:38 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:18.362 17:40:38 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:18.362 17:40:38 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:18.362 17:40:38 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:18.362 17:40:38 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:18.362 17:40:38 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:18.362 17:40:38 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:18.362 17:40:38 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:18.362 17:40:38 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.362 17:40:38 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.362 17:40:38 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.362 17:40:38 json_config -- paths/export.sh@5 -- # export PATH 00:06:18.362 17:40:38 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.362 17:40:38 json_config -- nvmf/common.sh@51 -- # : 0 00:06:18.362 17:40:38 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:18.362 17:40:38 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:18.362 17:40:38 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:18.362 17:40:38 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:18.362 17:40:38 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:18.362 17:40:38 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:18.362 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:18.362 17:40:38 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:18.362 17:40:38 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:18.362 17:40:38 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:18.362 17:40:38 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:18.362 17:40:38 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:18.362 17:40:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:18.362 17:40:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:18.362 WARNING: No tests are enabled so not running JSON configuration tests 00:06:18.362 17:40:38 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:18.362 17:40:38 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:18.362 17:40:38 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:18.362 ************************************ 00:06:18.362 END TEST json_config 00:06:18.362 ************************************ 00:06:18.362 00:06:18.362 real 0m0.145s 00:06:18.362 user 0m0.086s 00:06:18.362 sys 0m0.063s 00:06:18.362 17:40:38 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:18.362 17:40:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.362 17:40:38 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:18.362 17:40:38 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:18.362 17:40:38 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:18.362 17:40:38 -- common/autotest_common.sh@10 -- # set +x 00:06:18.362 ************************************ 00:06:18.362 START TEST json_config_extra_key 00:06:18.362 ************************************ 00:06:18.362 17:40:38 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:18.362 17:40:38 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:18.362 17:40:38 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:18.362 17:40:38 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:06:18.362 17:40:38 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:18.362 17:40:38 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.362 17:40:38 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.362 17:40:38 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.362 17:40:38 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.362 17:40:38 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.362 17:40:38 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.362 17:40:38 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.362 17:40:38 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.362 17:40:38 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.362 17:40:38 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.362 17:40:38 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.362 17:40:38 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:18.362 17:40:38 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:18.362 17:40:38 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.362 17:40:38 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:18.362 17:40:38 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:18.362 17:40:38 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:18.362 17:40:38 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.362 17:40:38 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:18.362 17:40:38 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.362 17:40:38 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:18.362 17:40:38 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:18.362 17:40:38 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.362 17:40:38 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:18.362 17:40:38 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.362 17:40:38 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.362 17:40:38 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.362 17:40:38 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:18.362 17:40:38 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.362 17:40:38 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:18.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.362 --rc genhtml_branch_coverage=1 00:06:18.362 --rc genhtml_function_coverage=1 00:06:18.362 --rc genhtml_legend=1 00:06:18.362 --rc geninfo_all_blocks=1 00:06:18.362 --rc geninfo_unexecuted_blocks=1 00:06:18.362 00:06:18.362 ' 00:06:18.362 17:40:38 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:18.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.363 --rc genhtml_branch_coverage=1 00:06:18.363 --rc genhtml_function_coverage=1 00:06:18.363 --rc genhtml_legend=1 00:06:18.363 --rc geninfo_all_blocks=1 00:06:18.363 --rc geninfo_unexecuted_blocks=1 00:06:18.363 00:06:18.363 ' 00:06:18.363 17:40:38 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:18.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.363 --rc genhtml_branch_coverage=1 00:06:18.363 --rc genhtml_function_coverage=1 00:06:18.363 --rc genhtml_legend=1 00:06:18.363 --rc geninfo_all_blocks=1 00:06:18.363 --rc geninfo_unexecuted_blocks=1 00:06:18.363 00:06:18.363 ' 00:06:18.363 17:40:38 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:18.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.363 --rc genhtml_branch_coverage=1 00:06:18.363 --rc 
genhtml_function_coverage=1 00:06:18.363 --rc genhtml_legend=1 00:06:18.363 --rc geninfo_all_blocks=1 00:06:18.363 --rc geninfo_unexecuted_blocks=1 00:06:18.363 00:06:18.363 ' 00:06:18.363 17:40:38 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:18.363 17:40:38 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:18.363 17:40:38 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:18.363 17:40:38 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:18.363 17:40:38 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:18.363 17:40:38 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:18.363 17:40:38 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:18.363 17:40:38 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:18.363 17:40:38 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:18.363 17:40:38 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:18.363 17:40:38 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:18.363 17:40:38 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:18.363 17:40:38 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e9f1c60b-b6ca-439f-b9ad-653632c758e1 00:06:18.363 17:40:38 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=e9f1c60b-b6ca-439f-b9ad-653632c758e1 00:06:18.363 17:40:38 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:18.363 17:40:38 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:18.363 17:40:38 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:18.363 17:40:38 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:18.363 17:40:38 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:18.363 17:40:38 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:18.363 17:40:38 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:18.363 17:40:38 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:18.363 17:40:38 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:18.363 17:40:38 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.363 17:40:38 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.363 17:40:38 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.363 17:40:38 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:18.363 17:40:38 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.363 17:40:38 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:18.363 17:40:38 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:18.363 17:40:38 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:18.363 17:40:38 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:18.363 17:40:38 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:18.363 17:40:38 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:18.363 17:40:38 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:18.363 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:18.363 17:40:38 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:18.363 17:40:38 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:18.363 17:40:38 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:18.363 17:40:38 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:18.363 17:40:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:18.363 17:40:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:18.363 17:40:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:18.363 17:40:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:18.363 17:40:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:18.363 17:40:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:18.363 17:40:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:18.363 17:40:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:18.363 INFO: launching applications... 00:06:18.363 17:40:38 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:18.363 17:40:38 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
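The `[: : integer expression expected` message above comes from nvmf/common.sh line 33 evaluating `'[' '' -eq 1 ']'`: the `-eq` operator inside `[` requires integer operands, so an unset or empty variable makes the test itself error out (it still evaluates false, which is why the run continues). A minimal sketch of the failing pattern and a defensive guard — `FLAG` is a stand-in name for whatever variable expanded empty here:

  #!/usr/bin/env bash
  FLAG=''                                      # empty, as in the trace above

  # Failing pattern: expands to '[' '' -eq 1 ']' and prints
  # "[: : integer expression expected" before evaluating false.
  if [ "$FLAG" -eq 1 ]; then echo enabled; fi

  # Guarded variants that stay well-formed when the flag is unset:
  if [ "${FLAG:-0}" -eq 1 ]; then echo enabled; fi          # default to 0
  if [[ -n $FLAG && $FLAG -eq 1 ]]; then echo enabled; fi   # skip test when empty

Either guard would silence the noise without changing the test's outcome.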
00:06:18.363 17:40:38 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:18.363 17:40:38 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:18.363 17:40:38 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:18.363 17:40:38 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:18.363 17:40:38 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:18.363 17:40:38 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:18.363 17:40:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:18.363 17:40:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:18.363 17:40:38 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=71157 00:06:18.363 Waiting for target to run... 00:06:18.363 17:40:38 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:18.363 17:40:38 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 71157 /var/tmp/spdk_tgt.sock 00:06:18.363 17:40:38 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 71157 ']' 00:06:18.363 17:40:38 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:18.363 17:40:38 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:18.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:18.363 17:40:38 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:18.363 17:40:38 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:18.363 17:40:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:18.363 17:40:38 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:18.622 [2024-11-05 17:40:38.403540] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:06:18.622 [2024-11-05 17:40:38.403669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71157 ] 00:06:18.882 [2024-11-05 17:40:38.696528] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:18.882 [2024-11-05 17:40:38.721764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.882 [2024-11-05 17:40:38.735428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.454 17:40:39 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:19.454 00:06:19.454 17:40:39 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:06:19.454 17:40:39 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:19.454 INFO: shutting down applications... 00:06:19.454 17:40:39 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
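The `waitforlisten` call traced above polls until the launched spdk_tgt (pid 71157) is both alive and listening on /var/tmp/spdk_tgt.sock, giving up after `max_retries=100`. A simplified sketch of that polling idiom, assuming a bare socket-existence check; the real helper in autotest_common.sh performs more elaborate readiness probes:

  # waitforlisten <pid> <unix-socket>: returns 0 once ready, 1 on death or timeout
  waitforlisten() {
      local pid=$1 sock=$2 max_retries=100 i
      for ((i = 0; i < max_retries; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1    # target died during startup
          [ -S "$sock" ] && return 0                # socket is up: app is listening
          sleep 0.1
      done
      return 1                                      # never came up within the budget
  }

The shutdown path inverts this: SIGINT the pid, then poll `kill -0` up to 30 times with `sleep 0.5` between attempts, as the trace just below shows.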
00:06:19.454 17:40:39 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:19.454 17:40:39 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:19.454 17:40:39 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:19.454 17:40:39 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 71157 ]] 00:06:19.454 17:40:39 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 71157 00:06:19.454 17:40:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:19.454 17:40:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:19.454 17:40:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71157 00:06:19.454 17:40:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:20.067 17:40:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:20.067 17:40:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:20.067 17:40:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71157 00:06:20.067 17:40:39 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:20.067 17:40:39 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:20.067 17:40:39 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:20.067 SPDK target shutdown done 00:06:20.067 17:40:39 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:20.067 Success 00:06:20.067 17:40:39 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:20.067 ************************************ 00:06:20.067 END TEST json_config_extra_key 00:06:20.067 ************************************ 00:06:20.067 00:06:20.067 real 0m1.568s 00:06:20.067 user 0m1.293s 00:06:20.067 sys 0m0.377s 00:06:20.067 17:40:39 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:20.067 17:40:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:20.067 17:40:39 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:20.067 17:40:39 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:20.067 17:40:39 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:20.067 17:40:39 -- common/autotest_common.sh@10 -- # set +x 00:06:20.067 ************************************ 00:06:20.067 START TEST alias_rpc 00:06:20.067 ************************************ 00:06:20.067 17:40:39 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:20.067 * Looking for test storage... 
00:06:20.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:20.067 17:40:39 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:20.067 17:40:39 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:20.067 17:40:39 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:20.067 17:40:39 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:20.067 17:40:39 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.067 17:40:39 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.067 17:40:39 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.067 17:40:39 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.067 17:40:39 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.067 17:40:39 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.067 17:40:39 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.067 17:40:39 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.067 17:40:39 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.067 17:40:39 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.067 17:40:39 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.067 17:40:39 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:20.067 17:40:39 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:20.067 17:40:39 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.067 17:40:39 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:20.067 17:40:39 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:20.067 17:40:39 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:20.067 17:40:39 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.067 17:40:39 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:20.067 17:40:39 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.067 17:40:39 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:20.067 17:40:39 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:20.067 17:40:39 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.067 17:40:39 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:20.067 17:40:39 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.067 17:40:39 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.067 17:40:39 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.067 17:40:39 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:20.067 17:40:39 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.067 17:40:39 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:20.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.068 --rc genhtml_branch_coverage=1 00:06:20.068 --rc genhtml_function_coverage=1 00:06:20.068 --rc genhtml_legend=1 00:06:20.068 --rc geninfo_all_blocks=1 00:06:20.068 --rc geninfo_unexecuted_blocks=1 00:06:20.068 00:06:20.068 ' 00:06:20.068 17:40:39 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:20.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.068 --rc genhtml_branch_coverage=1 00:06:20.068 --rc genhtml_function_coverage=1 00:06:20.068 --rc genhtml_legend=1 00:06:20.068 --rc geninfo_all_blocks=1 00:06:20.068 --rc geninfo_unexecuted_blocks=1 00:06:20.068 00:06:20.068 ' 00:06:20.068 17:40:39 alias_rpc -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:20.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.068 --rc genhtml_branch_coverage=1 00:06:20.068 --rc genhtml_function_coverage=1 00:06:20.068 --rc genhtml_legend=1 00:06:20.068 --rc geninfo_all_blocks=1 00:06:20.068 --rc geninfo_unexecuted_blocks=1 00:06:20.068 00:06:20.068 ' 00:06:20.068 17:40:39 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:20.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.068 --rc genhtml_branch_coverage=1 00:06:20.068 --rc genhtml_function_coverage=1 00:06:20.068 --rc genhtml_legend=1 00:06:20.068 --rc geninfo_all_blocks=1 00:06:20.068 --rc geninfo_unexecuted_blocks=1 00:06:20.068 00:06:20.068 ' 00:06:20.068 17:40:39 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:20.068 17:40:39 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=71231 00:06:20.068 17:40:39 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 71231 00:06:20.068 17:40:39 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 71231 ']' 00:06:20.068 17:40:39 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.068 17:40:39 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:20.068 17:40:39 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:20.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.068 17:40:39 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.068 17:40:39 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:20.068 17:40:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.068 [2024-11-05 17:40:40.029181] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:06:20.068 [2024-11-05 17:40:40.029310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71231 ] 00:06:20.328 [2024-11-05 17:40:40.160715] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
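The `killprocess` teardown traced just below first checks that pid 71231 is still alive with `kill -0`, reads its command name via `ps --no-headers -o comm=` to confirm it is an SPDK reactor (`reactor_0`) rather than a sudo wrapper, then kills it and waits so its exit status is reaped. A condensed sketch of that safety pattern:

  # Kill a target only after confirming the pid still belongs to it.
  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0     # already gone: nothing to do
      local name
      name=$(ps --no-headers -o comm= "$pid")    # "reactor_0" for a running spdk_tgt
      [ "$name" = sudo ] && return 1             # never kill through a sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null                    # reap; ignore "not a child" noise
  }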
00:06:20.328 [2024-11-05 17:40:40.185490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.328 [2024-11-05 17:40:40.210167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.897 17:40:40 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:20.897 17:40:40 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:20.897 17:40:40 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:21.156 17:40:41 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 71231 00:06:21.156 17:40:41 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 71231 ']' 00:06:21.156 17:40:41 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 71231 00:06:21.156 17:40:41 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:06:21.156 17:40:41 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:21.156 17:40:41 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71231 00:06:21.156 17:40:41 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:21.156 17:40:41 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:21.156 killing process with pid 71231 00:06:21.156 17:40:41 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71231' 00:06:21.156 17:40:41 alias_rpc -- common/autotest_common.sh@971 -- # kill 71231 00:06:21.156 17:40:41 alias_rpc -- common/autotest_common.sh@976 -- # wait 71231 00:06:21.723 00:06:21.723 real 0m1.612s 00:06:21.723 user 0m1.714s 00:06:21.723 sys 0m0.416s 00:06:21.723 17:40:41 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:21.723 ************************************ 00:06:21.723 END TEST alias_rpc 00:06:21.723 ************************************ 00:06:21.723 17:40:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.723 17:40:41 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:21.723 17:40:41 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:21.723 17:40:41 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:21.723 17:40:41 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:21.723 17:40:41 -- common/autotest_common.sh@10 -- # set +x 00:06:21.723 ************************************ 00:06:21.723 START TEST spdkcli_tcp 00:06:21.723 ************************************ 00:06:21.723 17:40:41 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:21.723 * Looking for test storage... 
00:06:21.723 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:21.723 17:40:41 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:21.723 17:40:41 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:21.723 17:40:41 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:06:21.723 17:40:41 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:21.723 17:40:41 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.723 17:40:41 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.723 17:40:41 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.723 17:40:41 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.723 17:40:41 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.723 17:40:41 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.723 17:40:41 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.723 17:40:41 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.723 17:40:41 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.723 17:40:41 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.723 17:40:41 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.723 17:40:41 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:21.723 17:40:41 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:21.723 17:40:41 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.723 17:40:41 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:21.723 17:40:41 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:21.723 17:40:41 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:21.723 17:40:41 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.723 17:40:41 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:21.723 17:40:41 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.723 17:40:41 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:21.723 17:40:41 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:21.723 17:40:41 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.723 17:40:41 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:21.723 17:40:41 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.723 17:40:41 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.723 17:40:41 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.723 17:40:41 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:21.723 17:40:41 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.723 17:40:41 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:21.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.723 --rc genhtml_branch_coverage=1 00:06:21.723 --rc genhtml_function_coverage=1 00:06:21.723 --rc genhtml_legend=1 00:06:21.723 --rc geninfo_all_blocks=1 00:06:21.723 --rc geninfo_unexecuted_blocks=1 00:06:21.723 00:06:21.723 ' 00:06:21.723 17:40:41 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:21.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.723 --rc genhtml_branch_coverage=1 00:06:21.723 --rc genhtml_function_coverage=1 00:06:21.723 --rc genhtml_legend=1 00:06:21.723 --rc geninfo_all_blocks=1 00:06:21.723 --rc geninfo_unexecuted_blocks=1 00:06:21.723 
00:06:21.723 ' 00:06:21.723 17:40:41 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:21.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.723 --rc genhtml_branch_coverage=1 00:06:21.723 --rc genhtml_function_coverage=1 00:06:21.723 --rc genhtml_legend=1 00:06:21.723 --rc geninfo_all_blocks=1 00:06:21.723 --rc geninfo_unexecuted_blocks=1 00:06:21.723 00:06:21.723 ' 00:06:21.723 17:40:41 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:21.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.723 --rc genhtml_branch_coverage=1 00:06:21.723 --rc genhtml_function_coverage=1 00:06:21.723 --rc genhtml_legend=1 00:06:21.723 --rc geninfo_all_blocks=1 00:06:21.723 --rc geninfo_unexecuted_blocks=1 00:06:21.723 00:06:21.723 ' 00:06:21.723 17:40:41 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:21.723 17:40:41 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:21.723 17:40:41 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:21.723 17:40:41 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:21.723 17:40:41 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:21.723 17:40:41 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:21.723 17:40:41 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:21.723 17:40:41 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:21.723 17:40:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:21.723 17:40:41 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=71310 00:06:21.723 17:40:41 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 71310 00:06:21.723 17:40:41 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 71310 ']' 00:06:21.723 17:40:41 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.723 17:40:41 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:21.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.723 17:40:41 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.723 17:40:41 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:21.723 17:40:41 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:21.723 17:40:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:21.723 [2024-11-05 17:40:41.698819] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:06:21.723 [2024-11-05 17:40:41.698938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71310 ] 00:06:21.982 [2024-11-05 17:40:41.829025] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
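To exercise the RPC layer over TCP, the spdkcli_tcp test below bridges TCP port 9998 on 127.0.0.1 to the target's UNIX socket with socat, then issues rpc_get_methods through that bridge. A minimal reproduction of the wiring, using the same commands the trace records:

  # Bridge 127.0.0.1:9998 to the spdk_tgt RPC socket, then query it over TCP.
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!

  # -r 100 / -t 2 are the retry and timeout settings the test passes to rpc.py.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 \
      -s 127.0.0.1 -p 9998 rpc_get_methods

  kill "$socat_pid"          # tear the bridge down when finished

The long method list that follows in the trace is simply the JSON reply to rpc_get_methods.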
00:06:21.982 [2024-11-05 17:40:41.853595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.982 [2024-11-05 17:40:41.877918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.982 [2024-11-05 17:40:41.877933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.547 17:40:42 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:22.547 17:40:42 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:06:22.547 17:40:42 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=71327 00:06:22.547 17:40:42 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:22.547 17:40:42 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:22.806 [ 00:06:22.806 "bdev_malloc_delete", 00:06:22.806 "bdev_malloc_create", 00:06:22.806 "bdev_null_resize", 00:06:22.806 "bdev_null_delete", 00:06:22.806 "bdev_null_create", 00:06:22.806 "bdev_nvme_cuse_unregister", 00:06:22.806 "bdev_nvme_cuse_register", 00:06:22.806 "bdev_opal_new_user", 00:06:22.806 "bdev_opal_set_lock_state", 00:06:22.806 "bdev_opal_delete", 00:06:22.806 "bdev_opal_get_info", 00:06:22.806 "bdev_opal_create", 00:06:22.806 "bdev_nvme_opal_revert", 00:06:22.806 "bdev_nvme_opal_init", 00:06:22.806 "bdev_nvme_send_cmd", 00:06:22.806 "bdev_nvme_set_keys", 00:06:22.806 "bdev_nvme_get_path_iostat", 00:06:22.806 "bdev_nvme_get_mdns_discovery_info", 00:06:22.806 "bdev_nvme_stop_mdns_discovery", 00:06:22.806 "bdev_nvme_start_mdns_discovery", 00:06:22.806 "bdev_nvme_set_multipath_policy", 00:06:22.806 "bdev_nvme_set_preferred_path", 00:06:22.806 "bdev_nvme_get_io_paths", 00:06:22.806 "bdev_nvme_remove_error_injection", 00:06:22.806 "bdev_nvme_add_error_injection", 00:06:22.806 "bdev_nvme_get_discovery_info", 00:06:22.806 "bdev_nvme_stop_discovery", 00:06:22.806 "bdev_nvme_start_discovery", 00:06:22.806 "bdev_nvme_get_controller_health_info", 00:06:22.806 "bdev_nvme_disable_controller", 00:06:22.806 "bdev_nvme_enable_controller", 00:06:22.806 "bdev_nvme_reset_controller", 00:06:22.806 "bdev_nvme_get_transport_statistics", 00:06:22.806 "bdev_nvme_apply_firmware", 00:06:22.806 "bdev_nvme_detach_controller", 00:06:22.806 "bdev_nvme_get_controllers", 00:06:22.806 "bdev_nvme_attach_controller", 00:06:22.806 "bdev_nvme_set_hotplug", 00:06:22.806 "bdev_nvme_set_options", 00:06:22.806 "bdev_passthru_delete", 00:06:22.806 "bdev_passthru_create", 00:06:22.806 "bdev_lvol_set_parent_bdev", 00:06:22.806 "bdev_lvol_set_parent", 00:06:22.806 "bdev_lvol_check_shallow_copy", 00:06:22.806 "bdev_lvol_start_shallow_copy", 00:06:22.806 "bdev_lvol_grow_lvstore", 00:06:22.806 "bdev_lvol_get_lvols", 00:06:22.806 "bdev_lvol_get_lvstores", 00:06:22.806 "bdev_lvol_delete", 00:06:22.806 "bdev_lvol_set_read_only", 00:06:22.806 "bdev_lvol_resize", 00:06:22.806 "bdev_lvol_decouple_parent", 00:06:22.806 "bdev_lvol_inflate", 00:06:22.806 "bdev_lvol_rename", 00:06:22.806 "bdev_lvol_clone_bdev", 00:06:22.806 "bdev_lvol_clone", 00:06:22.806 "bdev_lvol_snapshot", 00:06:22.806 "bdev_lvol_create", 00:06:22.806 "bdev_lvol_delete_lvstore", 00:06:22.806 "bdev_lvol_rename_lvstore", 00:06:22.806 "bdev_lvol_create_lvstore", 00:06:22.806 "bdev_raid_set_options", 00:06:22.806 "bdev_raid_remove_base_bdev", 00:06:22.806 "bdev_raid_add_base_bdev", 00:06:22.806 "bdev_raid_delete", 00:06:22.806 "bdev_raid_create", 00:06:22.806 "bdev_raid_get_bdevs", 00:06:22.806 "bdev_error_inject_error", 00:06:22.806 
"bdev_error_delete", 00:06:22.806 "bdev_error_create", 00:06:22.806 "bdev_split_delete", 00:06:22.806 "bdev_split_create", 00:06:22.806 "bdev_delay_delete", 00:06:22.806 "bdev_delay_create", 00:06:22.806 "bdev_delay_update_latency", 00:06:22.806 "bdev_zone_block_delete", 00:06:22.806 "bdev_zone_block_create", 00:06:22.806 "blobfs_create", 00:06:22.806 "blobfs_detect", 00:06:22.806 "blobfs_set_cache_size", 00:06:22.806 "bdev_xnvme_delete", 00:06:22.806 "bdev_xnvme_create", 00:06:22.806 "bdev_aio_delete", 00:06:22.806 "bdev_aio_rescan", 00:06:22.806 "bdev_aio_create", 00:06:22.806 "bdev_ftl_set_property", 00:06:22.806 "bdev_ftl_get_properties", 00:06:22.806 "bdev_ftl_get_stats", 00:06:22.806 "bdev_ftl_unmap", 00:06:22.806 "bdev_ftl_unload", 00:06:22.806 "bdev_ftl_delete", 00:06:22.806 "bdev_ftl_load", 00:06:22.806 "bdev_ftl_create", 00:06:22.806 "bdev_virtio_attach_controller", 00:06:22.806 "bdev_virtio_scsi_get_devices", 00:06:22.806 "bdev_virtio_detach_controller", 00:06:22.806 "bdev_virtio_blk_set_hotplug", 00:06:22.806 "bdev_iscsi_delete", 00:06:22.806 "bdev_iscsi_create", 00:06:22.806 "bdev_iscsi_set_options", 00:06:22.806 "accel_error_inject_error", 00:06:22.806 "ioat_scan_accel_module", 00:06:22.806 "dsa_scan_accel_module", 00:06:22.806 "iaa_scan_accel_module", 00:06:22.806 "keyring_file_remove_key", 00:06:22.806 "keyring_file_add_key", 00:06:22.806 "keyring_linux_set_options", 00:06:22.806 "fsdev_aio_delete", 00:06:22.806 "fsdev_aio_create", 00:06:22.806 "iscsi_get_histogram", 00:06:22.806 "iscsi_enable_histogram", 00:06:22.806 "iscsi_set_options", 00:06:22.806 "iscsi_get_auth_groups", 00:06:22.806 "iscsi_auth_group_remove_secret", 00:06:22.806 "iscsi_auth_group_add_secret", 00:06:22.806 "iscsi_delete_auth_group", 00:06:22.806 "iscsi_create_auth_group", 00:06:22.806 "iscsi_set_discovery_auth", 00:06:22.806 "iscsi_get_options", 00:06:22.806 "iscsi_target_node_request_logout", 00:06:22.806 "iscsi_target_node_set_redirect", 00:06:22.806 "iscsi_target_node_set_auth", 00:06:22.806 "iscsi_target_node_add_lun", 00:06:22.806 "iscsi_get_stats", 00:06:22.806 "iscsi_get_connections", 00:06:22.806 "iscsi_portal_group_set_auth", 00:06:22.806 "iscsi_start_portal_group", 00:06:22.806 "iscsi_delete_portal_group", 00:06:22.806 "iscsi_create_portal_group", 00:06:22.806 "iscsi_get_portal_groups", 00:06:22.806 "iscsi_delete_target_node", 00:06:22.806 "iscsi_target_node_remove_pg_ig_maps", 00:06:22.806 "iscsi_target_node_add_pg_ig_maps", 00:06:22.806 "iscsi_create_target_node", 00:06:22.806 "iscsi_get_target_nodes", 00:06:22.806 "iscsi_delete_initiator_group", 00:06:22.806 "iscsi_initiator_group_remove_initiators", 00:06:22.806 "iscsi_initiator_group_add_initiators", 00:06:22.806 "iscsi_create_initiator_group", 00:06:22.806 "iscsi_get_initiator_groups", 00:06:22.806 "nvmf_set_crdt", 00:06:22.806 "nvmf_set_config", 00:06:22.806 "nvmf_set_max_subsystems", 00:06:22.806 "nvmf_stop_mdns_prr", 00:06:22.806 "nvmf_publish_mdns_prr", 00:06:22.806 "nvmf_subsystem_get_listeners", 00:06:22.806 "nvmf_subsystem_get_qpairs", 00:06:22.806 "nvmf_subsystem_get_controllers", 00:06:22.806 "nvmf_get_stats", 00:06:22.806 "nvmf_get_transports", 00:06:22.806 "nvmf_create_transport", 00:06:22.806 "nvmf_get_targets", 00:06:22.806 "nvmf_delete_target", 00:06:22.806 "nvmf_create_target", 00:06:22.806 "nvmf_subsystem_allow_any_host", 00:06:22.806 "nvmf_subsystem_set_keys", 00:06:22.806 "nvmf_subsystem_remove_host", 00:06:22.806 "nvmf_subsystem_add_host", 00:06:22.806 "nvmf_ns_remove_host", 00:06:22.806 "nvmf_ns_add_host", 
00:06:22.806 "nvmf_subsystem_remove_ns", 00:06:22.806 "nvmf_subsystem_set_ns_ana_group", 00:06:22.806 "nvmf_subsystem_add_ns", 00:06:22.806 "nvmf_subsystem_listener_set_ana_state", 00:06:22.806 "nvmf_discovery_get_referrals", 00:06:22.806 "nvmf_discovery_remove_referral", 00:06:22.806 "nvmf_discovery_add_referral", 00:06:22.806 "nvmf_subsystem_remove_listener", 00:06:22.806 "nvmf_subsystem_add_listener", 00:06:22.806 "nvmf_delete_subsystem", 00:06:22.806 "nvmf_create_subsystem", 00:06:22.806 "nvmf_get_subsystems", 00:06:22.806 "env_dpdk_get_mem_stats", 00:06:22.806 "nbd_get_disks", 00:06:22.806 "nbd_stop_disk", 00:06:22.806 "nbd_start_disk", 00:06:22.806 "ublk_recover_disk", 00:06:22.806 "ublk_get_disks", 00:06:22.806 "ublk_stop_disk", 00:06:22.806 "ublk_start_disk", 00:06:22.806 "ublk_destroy_target", 00:06:22.806 "ublk_create_target", 00:06:22.806 "virtio_blk_create_transport", 00:06:22.806 "virtio_blk_get_transports", 00:06:22.806 "vhost_controller_set_coalescing", 00:06:22.807 "vhost_get_controllers", 00:06:22.807 "vhost_delete_controller", 00:06:22.807 "vhost_create_blk_controller", 00:06:22.807 "vhost_scsi_controller_remove_target", 00:06:22.807 "vhost_scsi_controller_add_target", 00:06:22.807 "vhost_start_scsi_controller", 00:06:22.807 "vhost_create_scsi_controller", 00:06:22.807 "thread_set_cpumask", 00:06:22.807 "scheduler_set_options", 00:06:22.807 "framework_get_governor", 00:06:22.807 "framework_get_scheduler", 00:06:22.807 "framework_set_scheduler", 00:06:22.807 "framework_get_reactors", 00:06:22.807 "thread_get_io_channels", 00:06:22.807 "thread_get_pollers", 00:06:22.807 "thread_get_stats", 00:06:22.807 "framework_monitor_context_switch", 00:06:22.807 "spdk_kill_instance", 00:06:22.807 "log_enable_timestamps", 00:06:22.807 "log_get_flags", 00:06:22.807 "log_clear_flag", 00:06:22.807 "log_set_flag", 00:06:22.807 "log_get_level", 00:06:22.807 "log_set_level", 00:06:22.807 "log_get_print_level", 00:06:22.807 "log_set_print_level", 00:06:22.807 "framework_enable_cpumask_locks", 00:06:22.807 "framework_disable_cpumask_locks", 00:06:22.807 "framework_wait_init", 00:06:22.807 "framework_start_init", 00:06:22.807 "scsi_get_devices", 00:06:22.807 "bdev_get_histogram", 00:06:22.807 "bdev_enable_histogram", 00:06:22.807 "bdev_set_qos_limit", 00:06:22.807 "bdev_set_qd_sampling_period", 00:06:22.807 "bdev_get_bdevs", 00:06:22.807 "bdev_reset_iostat", 00:06:22.807 "bdev_get_iostat", 00:06:22.807 "bdev_examine", 00:06:22.807 "bdev_wait_for_examine", 00:06:22.807 "bdev_set_options", 00:06:22.807 "accel_get_stats", 00:06:22.807 "accel_set_options", 00:06:22.807 "accel_set_driver", 00:06:22.807 "accel_crypto_key_destroy", 00:06:22.807 "accel_crypto_keys_get", 00:06:22.807 "accel_crypto_key_create", 00:06:22.807 "accel_assign_opc", 00:06:22.807 "accel_get_module_info", 00:06:22.807 "accel_get_opc_assignments", 00:06:22.807 "vmd_rescan", 00:06:22.807 "vmd_remove_device", 00:06:22.807 "vmd_enable", 00:06:22.807 "sock_get_default_impl", 00:06:22.807 "sock_set_default_impl", 00:06:22.807 "sock_impl_set_options", 00:06:22.807 "sock_impl_get_options", 00:06:22.807 "iobuf_get_stats", 00:06:22.807 "iobuf_set_options", 00:06:22.807 "keyring_get_keys", 00:06:22.807 "framework_get_pci_devices", 00:06:22.807 "framework_get_config", 00:06:22.807 "framework_get_subsystems", 00:06:22.807 "fsdev_set_opts", 00:06:22.807 "fsdev_get_opts", 00:06:22.807 "trace_get_info", 00:06:22.807 "trace_get_tpoint_group_mask", 00:06:22.807 "trace_disable_tpoint_group", 00:06:22.807 "trace_enable_tpoint_group", 00:06:22.807 
"trace_clear_tpoint_mask", 00:06:22.807 "trace_set_tpoint_mask", 00:06:22.807 "notify_get_notifications", 00:06:22.807 "notify_get_types", 00:06:22.807 "spdk_get_version", 00:06:22.807 "rpc_get_methods" 00:06:22.807 ] 00:06:22.807 17:40:42 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:22.807 17:40:42 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:22.807 17:40:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:22.807 17:40:42 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:22.807 17:40:42 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 71310 00:06:22.807 17:40:42 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 71310 ']' 00:06:22.807 17:40:42 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 71310 00:06:22.807 17:40:42 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:06:22.807 17:40:42 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:22.807 17:40:42 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71310 00:06:23.068 17:40:42 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:23.068 17:40:42 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:23.068 killing process with pid 71310 00:06:23.068 17:40:42 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71310' 00:06:23.068 17:40:42 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 71310 00:06:23.068 17:40:42 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 71310 00:06:23.329 00:06:23.329 real 0m1.632s 00:06:23.329 user 0m2.866s 00:06:23.329 sys 0m0.456s 00:06:23.329 ************************************ 00:06:23.329 END TEST spdkcli_tcp 00:06:23.329 ************************************ 00:06:23.329 17:40:43 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:23.329 17:40:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:23.329 17:40:43 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:23.329 17:40:43 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:23.329 17:40:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:23.329 17:40:43 -- common/autotest_common.sh@10 -- # set +x 00:06:23.329 ************************************ 00:06:23.329 START TEST dpdk_mem_utility 00:06:23.329 ************************************ 00:06:23.329 17:40:43 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:23.329 * Looking for test storage... 
00:06:23.329 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:23.329 17:40:43 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:23.329 17:40:43 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:23.329 17:40:43 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:06:23.329 17:40:43 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:23.329 17:40:43 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.329 17:40:43 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.329 17:40:43 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.329 17:40:43 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.329 17:40:43 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.329 17:40:43 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.329 17:40:43 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.329 17:40:43 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.329 17:40:43 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.329 17:40:43 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.329 17:40:43 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.329 17:40:43 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:23.329 17:40:43 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:23.329 17:40:43 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.329 17:40:43 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:23.329 17:40:43 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:23.329 17:40:43 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:23.329 17:40:43 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.329 17:40:43 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:23.329 17:40:43 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.329 17:40:43 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:23.329 17:40:43 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:23.329 17:40:43 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.329 17:40:43 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:23.329 17:40:43 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.329 17:40:43 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.329 17:40:43 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.329 17:40:43 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:23.329 17:40:43 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.329 17:40:43 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:23.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.330 --rc genhtml_branch_coverage=1 00:06:23.330 --rc genhtml_function_coverage=1 00:06:23.330 --rc genhtml_legend=1 00:06:23.330 --rc geninfo_all_blocks=1 00:06:23.330 --rc geninfo_unexecuted_blocks=1 00:06:23.330 00:06:23.330 ' 00:06:23.330 17:40:43 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:23.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.330 --rc 
genhtml_branch_coverage=1 00:06:23.330 --rc genhtml_function_coverage=1 00:06:23.330 --rc genhtml_legend=1 00:06:23.330 --rc geninfo_all_blocks=1 00:06:23.330 --rc geninfo_unexecuted_blocks=1 00:06:23.330 00:06:23.330 ' 00:06:23.330 17:40:43 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:23.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.330 --rc genhtml_branch_coverage=1 00:06:23.330 --rc genhtml_function_coverage=1 00:06:23.330 --rc genhtml_legend=1 00:06:23.330 --rc geninfo_all_blocks=1 00:06:23.330 --rc geninfo_unexecuted_blocks=1 00:06:23.330 00:06:23.330 ' 00:06:23.330 17:40:43 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:23.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.330 --rc genhtml_branch_coverage=1 00:06:23.330 --rc genhtml_function_coverage=1 00:06:23.330 --rc genhtml_legend=1 00:06:23.330 --rc geninfo_all_blocks=1 00:06:23.330 --rc geninfo_unexecuted_blocks=1 00:06:23.330 00:06:23.330 ' 00:06:23.330 17:40:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:23.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.330 17:40:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=71410 00:06:23.330 17:40:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 71410 00:06:23.330 17:40:43 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 71410 ']' 00:06:23.330 17:40:43 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.330 17:40:43 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:23.330 17:40:43 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.330 17:40:43 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:23.330 17:40:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:23.330 17:40:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:23.590 [2024-11-05 17:40:43.385942] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:06:23.590 [2024-11-05 17:40:43.386074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71410 ] 00:06:23.590 [2024-11-05 17:40:43.514499] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:23.590 [2024-11-05 17:40:43.531558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.590 [2024-11-05 17:40:43.555243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.533 17:40:44 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:24.533 17:40:44 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:06:24.533 17:40:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:24.533 17:40:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:24.533 17:40:44 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.533 17:40:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:24.533 { 00:06:24.533 "filename": "/tmp/spdk_mem_dump.txt" 00:06:24.533 } 00:06:24.533 17:40:44 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.533 17:40:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:24.533 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:24.533 1 heaps totaling size 810.000000 MiB 00:06:24.533 size: 810.000000 MiB heap id: 0 00:06:24.533 end heaps---------- 00:06:24.533 9 mempools totaling size 595.772034 MiB 00:06:24.533 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:24.533 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:24.533 size: 92.545471 MiB name: bdev_io_71410 00:06:24.533 size: 50.003479 MiB name: msgpool_71410 00:06:24.533 size: 36.509338 MiB name: fsdev_io_71410 00:06:24.533 size: 21.763794 MiB name: PDU_Pool 00:06:24.533 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:24.533 size: 4.133484 MiB name: evtpool_71410 00:06:24.533 size: 0.026123 MiB name: Session_Pool 00:06:24.533 end mempools------- 00:06:24.533 6 memzones totaling size 4.142822 MiB 00:06:24.533 size: 1.000366 MiB name: RG_ring_0_71410 00:06:24.533 size: 1.000366 MiB name: RG_ring_1_71410 00:06:24.533 size: 1.000366 MiB name: RG_ring_4_71410 00:06:24.533 size: 1.000366 MiB name: RG_ring_5_71410 00:06:24.533 size: 0.125366 MiB name: RG_ring_2_71410 00:06:24.533 size: 0.015991 MiB name: RG_ring_3_71410 00:06:24.533 end memzones------- 00:06:24.533 17:40:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:24.533 heap id: 0 total size: 810.000000 MiB number of busy elements: 308 number of free elements: 15 00:06:24.533 list of free elements. 
size: 10.696960 MiB 00:06:24.533 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:24.533 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:24.533 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:24.533 element at address: 0x200000400000 with size: 0.993958 MiB 00:06:24.533 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:24.533 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:24.533 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:24.533 element at address: 0x200000200000 with size: 0.600159 MiB 00:06:24.533 element at address: 0x20001a600000 with size: 0.567871 MiB 00:06:24.533 element at address: 0x20000a600000 with size: 0.488892 MiB 00:06:24.533 element at address: 0x200000c00000 with size: 0.487000 MiB 00:06:24.533 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:24.533 element at address: 0x200003e00000 with size: 0.480286 MiB 00:06:24.533 element at address: 0x200027a00000 with size: 0.396484 MiB 00:06:24.533 element at address: 0x200000800000 with size: 0.351746 MiB 00:06:24.533 list of standard malloc elements. size: 199.384155 MiB 00:06:24.533 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:24.533 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:24.533 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:24.533 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:24.533 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:24.533 element at address: 0x2000003bbf00 with size: 0.257935 MiB 00:06:24.533 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:24.533 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:24.533 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:24.533 element at address: 0x2000002b9c40 with size: 0.000183 MiB 00:06:24.533 element at address: 0x2000003bbe40 with size: 0.000183 MiB 00:06:24.533 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:06:24.533 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:06:24.533 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:06:24.533 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:06:24.533 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:06:24.533 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:06:24.533 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:06:24.533 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:06:24.533 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:06:24.533 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:06:24.533 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:06:24.533 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:06:24.533 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:06:24.533 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:06:24.533 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:06:24.533 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:06:24.533 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:06:24.533 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:06:24.533 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:06:24.533 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:06:24.533 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:06:24.533 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:06:24.533 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:06:24.533 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:06:24.533 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:06:24.533 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:24.533 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:24.533 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:06:24.533 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:24.533 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:24.533 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:06:24.533 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:06:24.533 element at address: 0x20000085e580 with size: 0.000183 MiB 00:06:24.533 element at address: 0x20000087e840 with size: 0.000183 MiB 00:06:24.533 element at address: 0x20000087e900 with size: 0.000183 MiB 00:06:24.533 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:06:24.533 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:06:24.533 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:06:24.533 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:06:24.533 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:06:24.533 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:06:24.533 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:06:24.533 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:06:24.533 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:06:24.533 element at address: 0x20000087f080 with size: 0.000183 MiB 00:06:24.533 element at address: 0x20000087f140 with size: 0.000183 MiB 00:06:24.533 element at address: 0x20000087f200 with size: 0.000183 MiB 00:06:24.533 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:06:24.533 element at address: 0x20000087f380 with size: 0.000183 MiB 00:06:24.533 element at address: 0x20000087f440 with size: 0.000183 MiB 00:06:24.533 element at address: 0x20000087f500 with size: 0.000183 MiB 00:06:24.533 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:24.533 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:24.533 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:24.533 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:24.533 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:06:24.533 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:06:24.533 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:06:24.533 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:06:24.533 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:06:24.534 element at 
address: 0x200000c7d6c0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:24.534 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20000a67d4c0 
with size: 0.000183 MiB 00:06:24.534 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:24.534 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:24.534 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a691600 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a6916c0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a691780 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a691840 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a691900 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a692080 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a692140 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a692200 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a692380 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a692440 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a692500 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a692680 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a692740 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a692800 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a692980 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a693040 with size: 0.000183 MiB 
00:06:24.534 element at address: 0x20001a693100 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a693280 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a693340 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a693400 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a693580 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a693640 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a693700 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a693880 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a693940 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a694000 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a694180 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a694240 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a694300 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a694480 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a694540 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a694600 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a694780 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a694840 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a694900 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a695080 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a695140 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a695200 with size: 0.000183 MiB 00:06:24.534 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:06:24.535 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:24.535 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a65800 with size: 0.000183 MiB 00:06:24.535 element at 
address: 0x200027a658c0 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6c4c0 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6ea00 
with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:24.535 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:24.535 list of memzone associated elements. 
size: 599.918884 MiB 00:06:24.535 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:24.535 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:24.535 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:24.535 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:24.535 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:24.535 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_71410_0 00:06:24.535 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:24.535 associated memzone info: size: 48.002930 MiB name: MP_msgpool_71410_0 00:06:24.535 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:24.535 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_71410_0 00:06:24.535 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:24.535 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:24.535 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:24.535 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:24.535 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:24.535 associated memzone info: size: 3.000122 MiB name: MP_evtpool_71410_0 00:06:24.535 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:24.535 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_71410 00:06:24.535 element at address: 0x2000002b9d00 with size: 1.008118 MiB 00:06:24.535 associated memzone info: size: 1.007996 MiB name: MP_evtpool_71410 00:06:24.535 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:24.535 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:24.535 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:24.535 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:24.535 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:24.535 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:24.535 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:24.535 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:24.535 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:24.535 associated memzone info: size: 1.000366 MiB name: RG_ring_0_71410 00:06:24.535 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:24.535 associated memzone info: size: 1.000366 MiB name: RG_ring_1_71410 00:06:24.535 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:24.535 associated memzone info: size: 1.000366 MiB name: RG_ring_4_71410 00:06:24.535 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:06:24.535 associated memzone info: size: 1.000366 MiB name: RG_ring_5_71410 00:06:24.535 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:24.535 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_71410 00:06:24.535 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:24.535 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_71410 00:06:24.535 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:24.535 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:24.535 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:24.535 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:24.535 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:24.535 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:06:24.535 element at address: 0x200000299a40 with size: 0.125488 MiB 00:06:24.535 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_71410 00:06:24.535 element at address: 0x20000085e640 with size: 0.125488 MiB 00:06:24.535 associated memzone info: size: 0.125366 MiB name: RG_ring_2_71410 00:06:24.535 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:24.535 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:24.535 element at address: 0x200027a65980 with size: 0.023743 MiB 00:06:24.535 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:24.535 element at address: 0x20000085a380 with size: 0.016113 MiB 00:06:24.535 associated memzone info: size: 0.015991 MiB name: RG_ring_3_71410 00:06:24.535 element at address: 0x200027a6bac0 with size: 0.002441 MiB 00:06:24.535 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:24.535 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:06:24.535 associated memzone info: size: 0.000183 MiB name: MP_msgpool_71410 00:06:24.535 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:24.535 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_71410 00:06:24.535 element at address: 0x20000085a180 with size: 0.000305 MiB 00:06:24.535 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_71410 00:06:24.535 element at address: 0x200027a6c580 with size: 0.000305 MiB 00:06:24.535 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:24.535 17:40:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:24.535 17:40:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 71410 00:06:24.535 17:40:44 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 71410 ']' 00:06:24.536 17:40:44 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 71410 00:06:24.536 17:40:44 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:06:24.536 17:40:44 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:24.536 17:40:44 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71410 00:06:24.536 17:40:44 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:24.536 17:40:44 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:24.536 killing process with pid 71410 00:06:24.536 17:40:44 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71410' 00:06:24.536 17:40:44 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 71410 00:06:24.536 17:40:44 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 71410 00:06:24.813 00:06:24.813 real 0m1.451s 00:06:24.813 user 0m1.409s 00:06:24.813 sys 0m0.409s 00:06:24.813 ************************************ 00:06:24.813 END TEST dpdk_mem_utility 00:06:24.813 ************************************ 00:06:24.813 17:40:44 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:24.813 17:40:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:24.813 17:40:44 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:24.813 17:40:44 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:24.813 17:40:44 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:24.813 17:40:44 -- common/autotest_common.sh@10 -- # set +x 
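The dpdk_mem_utility test above exercises SPDK's env_dpdk_get_mem_stats RPC, which asks the running spdk_tgt to write its DPDK heap, mempool, and memzone state to /tmp/spdk_mem_dump.txt, and then summarizes that dump with scripts/dpdk_mem_info.py. A minimal manual sketch of the same flow, assuming an SPDK checkout built at the repository root and the default /var/tmp/spdk.sock RPC socket (the same invocations the test script traces at test_dpdk_mem_info.sh@12, @19, @21, and @23):

  ./build/bin/spdk_tgt &                    # start the target; the test waits for the RPC socket via waitforlisten
  ./scripts/rpc.py env_dpdk_get_mem_stats   # returns {"filename": "/tmp/spdk_mem_dump.txt"} once the dump is written
  ./scripts/dpdk_mem_info.py                # overall summary: heaps, mempools, memzones
  ./scripts/dpdk_mem_info.py -m 0           # per-element listing for malloc heap 0

The heap totals reported above ("DPDK memory size 810.000000 MiB in 1 heap(s)", the mempool and memzone tallies) come from the bare summary pass, while the -m 0 invocation produces the long "element at address ... with size ..." free-element and malloc-element listing.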
00:06:24.813 ************************************ 00:06:24.813 START TEST event 00:06:24.813 ************************************ 00:06:24.813 17:40:44 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:24.813 * Looking for test storage... 00:06:24.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:24.813 17:40:44 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:24.813 17:40:44 event -- common/autotest_common.sh@1691 -- # lcov --version 00:06:24.813 17:40:44 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:24.813 17:40:44 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:24.813 17:40:44 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.813 17:40:44 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.813 17:40:44 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.813 17:40:44 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.813 17:40:44 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.813 17:40:44 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.813 17:40:44 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.813 17:40:44 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.813 17:40:44 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.813 17:40:44 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.813 17:40:44 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.813 17:40:44 event -- scripts/common.sh@344 -- # case "$op" in 00:06:24.813 17:40:44 event -- scripts/common.sh@345 -- # : 1 00:06:24.813 17:40:44 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.813 17:40:44 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:24.813 17:40:44 event -- scripts/common.sh@365 -- # decimal 1 00:06:24.813 17:40:44 event -- scripts/common.sh@353 -- # local d=1 00:06:24.813 17:40:44 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.813 17:40:44 event -- scripts/common.sh@355 -- # echo 1 00:06:24.813 17:40:44 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.813 17:40:44 event -- scripts/common.sh@366 -- # decimal 2 00:06:24.813 17:40:44 event -- scripts/common.sh@353 -- # local d=2 00:06:24.813 17:40:44 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.813 17:40:44 event -- scripts/common.sh@355 -- # echo 2 00:06:24.813 17:40:44 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.813 17:40:44 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.813 17:40:44 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.813 17:40:44 event -- scripts/common.sh@368 -- # return 0 00:06:24.813 17:40:44 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.813 17:40:44 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:24.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.813 --rc genhtml_branch_coverage=1 00:06:24.813 --rc genhtml_function_coverage=1 00:06:24.813 --rc genhtml_legend=1 00:06:24.813 --rc geninfo_all_blocks=1 00:06:24.813 --rc geninfo_unexecuted_blocks=1 00:06:24.813 00:06:24.813 ' 00:06:24.813 17:40:44 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:24.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.813 --rc genhtml_branch_coverage=1 00:06:24.813 --rc genhtml_function_coverage=1 00:06:24.813 --rc genhtml_legend=1 00:06:24.813 --rc 
geninfo_all_blocks=1 00:06:24.813 --rc geninfo_unexecuted_blocks=1 00:06:24.813 00:06:24.813 ' 00:06:24.813 17:40:44 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:24.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.813 --rc genhtml_branch_coverage=1 00:06:24.813 --rc genhtml_function_coverage=1 00:06:24.813 --rc genhtml_legend=1 00:06:24.813 --rc geninfo_all_blocks=1 00:06:24.813 --rc geninfo_unexecuted_blocks=1 00:06:24.813 00:06:24.813 ' 00:06:24.813 17:40:44 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:24.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.813 --rc genhtml_branch_coverage=1 00:06:24.813 --rc genhtml_function_coverage=1 00:06:24.813 --rc genhtml_legend=1 00:06:24.813 --rc geninfo_all_blocks=1 00:06:24.813 --rc geninfo_unexecuted_blocks=1 00:06:24.813 00:06:24.813 ' 00:06:24.813 17:40:44 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:24.813 17:40:44 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:24.813 17:40:44 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:24.814 17:40:44 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:06:24.814 17:40:44 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:24.814 17:40:44 event -- common/autotest_common.sh@10 -- # set +x 00:06:25.073 ************************************ 00:06:25.073 START TEST event_perf 00:06:25.073 ************************************ 00:06:25.073 17:40:44 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:25.073 Running I/O for 1 seconds...[2024-11-05 17:40:44.837554] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:06:25.073 [2024-11-05 17:40:44.837674] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71491 ] 00:06:25.073 [2024-11-05 17:40:44.966849] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:25.073 [2024-11-05 17:40:44.989275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:25.073 [2024-11-05 17:40:45.016229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.073 [2024-11-05 17:40:45.016448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.073 [2024-11-05 17:40:45.016704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.073 Running I/O for 1 seconds...[2024-11-05 17:40:45.016789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:26.454 00:06:26.455 lcore 0: 164846 00:06:26.455 lcore 1: 164849 00:06:26.455 lcore 2: 164851 00:06:26.455 lcore 3: 164850 00:06:26.455 done. 
00:06:26.455 00:06:26.455 real 0m1.259s 00:06:26.455 user 0m4.058s 00:06:26.455 sys 0m0.082s 00:06:26.455 17:40:46 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:26.455 17:40:46 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:26.455 ************************************ 00:06:26.455 END TEST event_perf 00:06:26.455 ************************************ 00:06:26.455 17:40:46 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:26.455 17:40:46 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:26.455 17:40:46 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:26.455 17:40:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:26.455 ************************************ 00:06:26.455 START TEST event_reactor 00:06:26.455 ************************************ 00:06:26.455 17:40:46 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:26.455 [2024-11-05 17:40:46.167623] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:06:26.455 [2024-11-05 17:40:46.167769] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71525 ] 00:06:26.455 [2024-11-05 17:40:46.298751] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:26.455 [2024-11-05 17:40:46.329333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.455 [2024-11-05 17:40:46.368856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.842 test_start 00:06:27.842 oneshot 00:06:27.842 tick 100 00:06:27.842 tick 100 00:06:27.842 tick 250 00:06:27.842 tick 100 00:06:27.842 tick 100 00:06:27.842 tick 100 00:06:27.842 tick 250 00:06:27.842 tick 500 00:06:27.842 tick 100 00:06:27.842 tick 100 00:06:27.842 tick 250 00:06:27.842 tick 100 00:06:27.842 tick 100 00:06:27.842 test_end 00:06:27.842 00:06:27.842 real 0m1.311s 00:06:27.842 user 0m1.110s 00:06:27.842 sys 0m0.090s 00:06:27.842 17:40:47 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:27.842 17:40:47 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:27.842 ************************************ 00:06:27.842 END TEST event_reactor 00:06:27.842 ************************************ 00:06:27.842 17:40:47 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:27.842 17:40:47 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:27.842 17:40:47 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:27.842 17:40:47 event -- common/autotest_common.sh@10 -- # set +x 00:06:27.842 ************************************ 00:06:27.842 START TEST event_reactor_perf 00:06:27.842 ************************************ 00:06:27.842 17:40:47 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:27.843 [2024-11-05 17:40:47.553970] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 
00:06:27.843 [2024-11-05 17:40:47.554158] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71561 ] 00:06:27.843 [2024-11-05 17:40:47.689640] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:27.843 [2024-11-05 17:40:47.718709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.843 [2024-11-05 17:40:47.744440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.228 test_start 00:06:29.228 test_end 00:06:29.228 Performance: 315356 events per second 00:06:29.228 00:06:29.228 real 0m1.304s 00:06:29.228 user 0m1.112s 00:06:29.228 sys 0m0.082s 00:06:29.228 17:40:48 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:29.228 17:40:48 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:29.228 ************************************ 00:06:29.228 END TEST event_reactor_perf 00:06:29.228 ************************************ 00:06:29.228 17:40:48 event -- event/event.sh@49 -- # uname -s 00:06:29.228 17:40:48 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:29.228 17:40:48 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:29.228 17:40:48 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:29.228 17:40:48 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:29.228 17:40:48 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.228 ************************************ 00:06:29.228 START TEST event_scheduler 00:06:29.228 ************************************ 00:06:29.228 17:40:48 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:29.228 * Looking for test storage... 
00:06:29.228 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:29.228 17:40:48 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:29.228 17:40:48 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:06:29.228 17:40:48 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:29.228 17:40:49 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:29.228 17:40:49 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.228 17:40:49 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.228 17:40:49 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.228 17:40:49 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.228 17:40:49 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.228 17:40:49 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.228 17:40:49 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.228 17:40:49 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.228 17:40:49 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.228 17:40:49 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.228 17:40:49 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.228 17:40:49 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:29.228 17:40:49 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:29.228 17:40:49 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.228 17:40:49 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:29.228 17:40:49 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:29.228 17:40:49 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:29.228 17:40:49 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.228 17:40:49 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:29.228 17:40:49 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.228 17:40:49 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:29.228 17:40:49 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:29.228 17:40:49 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.228 17:40:49 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:29.228 17:40:49 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.228 17:40:49 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.228 17:40:49 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.228 17:40:49 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:29.228 17:40:49 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.228 17:40:49 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:29.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.228 --rc genhtml_branch_coverage=1 00:06:29.228 --rc genhtml_function_coverage=1 00:06:29.228 --rc genhtml_legend=1 00:06:29.228 --rc geninfo_all_blocks=1 00:06:29.228 --rc geninfo_unexecuted_blocks=1 00:06:29.228 00:06:29.228 ' 00:06:29.228 17:40:49 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:29.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.228 --rc genhtml_branch_coverage=1 00:06:29.228 --rc genhtml_function_coverage=1 00:06:29.228 --rc genhtml_legend=1 00:06:29.228 --rc geninfo_all_blocks=1 00:06:29.228 --rc geninfo_unexecuted_blocks=1 00:06:29.228 00:06:29.228 ' 00:06:29.228 17:40:49 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:29.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.228 --rc genhtml_branch_coverage=1 00:06:29.228 --rc genhtml_function_coverage=1 00:06:29.228 --rc genhtml_legend=1 00:06:29.228 --rc geninfo_all_blocks=1 00:06:29.228 --rc geninfo_unexecuted_blocks=1 00:06:29.228 00:06:29.228 ' 00:06:29.228 17:40:49 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:29.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.228 --rc genhtml_branch_coverage=1 00:06:29.228 --rc genhtml_function_coverage=1 00:06:29.228 --rc genhtml_legend=1 00:06:29.228 --rc geninfo_all_blocks=1 00:06:29.228 --rc geninfo_unexecuted_blocks=1 00:06:29.228 00:06:29.228 ' 00:06:29.228 17:40:49 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:29.229 17:40:49 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=71632 00:06:29.229 17:40:49 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:29.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:29.229 17:40:49 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 71632 00:06:29.229 17:40:49 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 71632 ']' 00:06:29.229 17:40:49 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.229 17:40:49 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:29.229 17:40:49 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.229 17:40:49 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:29.229 17:40:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:29.229 17:40:49 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:29.229 [2024-11-05 17:40:49.143169] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:06:29.229 [2024-11-05 17:40:49.143341] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71632 ] 00:06:29.490 [2024-11-05 17:40:49.281831] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:29.490 [2024-11-05 17:40:49.306275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:29.490 [2024-11-05 17:40:49.339032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.490 [2024-11-05 17:40:49.339380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.490 [2024-11-05 17:40:49.339830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.490 [2024-11-05 17:40:49.339852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.116 17:40:50 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:30.116 17:40:50 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:06:30.116 17:40:50 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:30.116 17:40:50 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.116 17:40:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:30.116 POWER: acpi-cpufreq driver is not supported 00:06:30.116 POWER: intel_pstate driver is not supported 00:06:30.116 POWER: amd-pstate driver is not supported 00:06:30.116 POWER: cppc_cpufreq driver is not supported 00:06:30.116 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:30.116 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:30.116 POWER: Unable to set Power Management Environment for lcore 0 00:06:30.116 [2024-11-05 17:40:50.009816] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:30.116 [2024-11-05 17:40:50.009837] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:30.116 [2024-11-05 17:40:50.009849] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:30.116 [2024-11-05 17:40:50.009879] scheduler_dynamic.c: 
427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:30.116 [2024-11-05 17:40:50.009889] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:30.116 [2024-11-05 17:40:50.009897] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:30.116 17:40:50 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.116 17:40:50 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:30.116 17:40:50 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.116 17:40:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:30.116 [2024-11-05 17:40:50.087815] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:30.116 17:40:50 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.116 17:40:50 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:30.116 17:40:50 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:30.116 17:40:50 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:30.116 17:40:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:30.116 ************************************ 00:06:30.116 START TEST scheduler_create_thread 00:06:30.116 ************************************ 00:06:30.116 17:40:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:06:30.116 17:40:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:30.116 17:40:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.116 17:40:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.378 2 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.378 3 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.378 4 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.378 5 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.378 6 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.378 7 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.378 8 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.378 9 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.378 10 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.378 17:40:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.319 17:40:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.319 17:40:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:31.319 17:40:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.319 17:40:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.694 17:40:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.694 17:40:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:32.694 17:40:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:32.694 17:40:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.694 17:40:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.628 17:40:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.628 00:06:33.628 real 0m3.376s 00:06:33.628 user 0m0.012s 00:06:33.628 sys 0m0.009s 00:06:33.628 17:40:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:33.628 ************************************ 00:06:33.628 END TEST scheduler_create_thread 00:06:33.628 ************************************ 00:06:33.628 17:40:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.628 17:40:53 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:33.628 17:40:53 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 71632 00:06:33.628 17:40:53 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 71632 ']' 00:06:33.628 17:40:53 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 71632 00:06:33.628 17:40:53 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:06:33.628 17:40:53 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:33.628 17:40:53 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71632 00:06:33.628 killing process with pid 71632 00:06:33.628 17:40:53 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:06:33.628 17:40:53 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:06:33.628 17:40:53 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71632' 00:06:33.628 17:40:53 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 71632 00:06:33.628 17:40:53 event.event_scheduler -- 
common/autotest_common.sh@976 -- # wait 71632 00:06:33.886 [2024-11-05 17:40:53.859340] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:34.147 00:06:34.147 real 0m5.136s 00:06:34.147 user 0m10.189s 00:06:34.147 sys 0m0.413s 00:06:34.147 17:40:54 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:34.147 17:40:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.147 ************************************ 00:06:34.147 END TEST event_scheduler 00:06:34.147 ************************************ 00:06:34.147 17:40:54 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:34.147 17:40:54 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:34.147 17:40:54 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:34.147 17:40:54 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:34.147 17:40:54 event -- common/autotest_common.sh@10 -- # set +x 00:06:34.147 ************************************ 00:06:34.147 START TEST app_repeat 00:06:34.147 ************************************ 00:06:34.147 17:40:54 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:06:34.147 17:40:54 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.147 17:40:54 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.147 17:40:54 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:34.147 17:40:54 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:34.147 17:40:54 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:34.147 17:40:54 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:34.147 17:40:54 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:34.147 17:40:54 event.app_repeat -- event/event.sh@19 -- # repeat_pid=71738 00:06:34.147 17:40:54 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:34.147 Process app_repeat pid: 71738 00:06:34.147 17:40:54 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 71738' 00:06:34.147 17:40:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:34.147 spdk_app_start Round 0 00:06:34.147 17:40:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:34.147 17:40:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71738 /var/tmp/spdk-nbd.sock 00:06:34.147 17:40:54 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:34.147 17:40:54 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 71738 ']' 00:06:34.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:34.147 17:40:54 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:34.147 17:40:54 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:34.147 17:40:54 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
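[editor's note] Everything the scheduler test just traced is plain JSON-RPC against the test app's /var/tmp/spdk.sock: select the dynamic scheduler while the app is parked in --wait-for-rpc, finish framework init, then create pinned busy/idle threads plus a few unpinned ones and exercise set-active and delete. A hedged sketch of that sequence (the rpc.py option spellings for the set_opts values are an assumption for this build; the thread calls come from the test app's scheduler_plugin, so PYTHONPATH must point at it):

    rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
    trpc="$rpc --plugin scheduler_plugin"            # plugin shipped with the test app
    # Dynamic scheduler with the limits the set_opts notices reported
    # (load 20, core 80, busy 95) -- flag names assumed, see note above.
    $rpc framework_set_scheduler dynamic --load-limit 20 --core-limit 80 --core-busy 95
    $rpc framework_start_init
    for mask in 0x1 0x2 0x4 0x8; do
        $trpc scheduler_thread_create -n active_pinned -m $mask -a 100   # busy, one per core
        $trpc scheduler_thread_create -n idle_pinned   -m $mask -a 0     # idle, one per core
    done
    $trpc scheduler_thread_create -n one_third_active -a 30              # unpinned, ~30% busy
    id=$($trpc scheduler_thread_create -n half_active -a 0)              # created idle (thread 11)
    $trpc scheduler_thread_set_active "$id" 50                           # then raised to 50%
    id=$($trpc scheduler_thread_create -n deleted -a 100)                # thread 12
    $trpc scheduler_thread_delete "$id"                                  # create-then-delete path

The dpdk governor errors above are expected in this VM: no cpufreq driver is available, so the dynamic scheduler runs without frequency scaling and the test proceeds anyway.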
00:06:34.147 17:40:54 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:34.147 17:40:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:34.408 [2024-11-05 17:40:54.155956] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:06:34.408 [2024-11-05 17:40:54.156144] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71738 ] 00:06:34.408 [2024-11-05 17:40:54.288228] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:34.408 [2024-11-05 17:40:54.315968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.408 [2024-11-05 17:40:54.344471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.408 [2024-11-05 17:40:54.344491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.352 17:40:55 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:35.352 17:40:55 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:35.352 17:40:55 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:35.352 Malloc0 00:06:35.352 17:40:55 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:35.645 Malloc1 00:06:35.645 17:40:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:35.645 17:40:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.645 17:40:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.645 17:40:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:35.645 17:40:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.645 17:40:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:35.645 17:40:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:35.645 17:40:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.645 17:40:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.645 17:40:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:35.645 17:40:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.645 17:40:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:35.645 17:40:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:35.645 17:40:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:35.645 17:40:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.645 17:40:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:35.906 /dev/nbd0 00:06:35.906 17:40:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:35.906 17:40:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:35.906 17:40:55 
event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:35.906 17:40:55 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:35.906 17:40:55 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:35.906 17:40:55 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:35.906 17:40:55 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:35.906 17:40:55 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:35.906 17:40:55 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:35.906 17:40:55 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:35.906 17:40:55 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:35.906 1+0 records in 00:06:35.906 1+0 records out 00:06:35.906 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000655517 s, 6.2 MB/s 00:06:35.906 17:40:55 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:35.906 17:40:55 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:35.906 17:40:55 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:35.906 17:40:55 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:35.906 17:40:55 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:35.906 17:40:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:35.906 17:40:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.906 17:40:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:36.166 /dev/nbd1 00:06:36.166 17:40:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:36.166 17:40:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:36.166 17:40:55 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:36.166 17:40:55 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:36.166 17:40:55 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:36.166 17:40:55 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:36.166 17:40:55 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:36.166 17:40:55 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:36.166 17:40:55 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:36.166 17:40:55 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:36.166 17:40:55 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:36.166 1+0 records in 00:06:36.166 1+0 records out 00:06:36.166 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467923 s, 8.8 MB/s 00:06:36.166 17:40:55 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:36.166 17:40:55 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:36.166 17:40:55 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:36.166 17:40:55 event.app_repeat -- common/autotest_common.sh@890 -- # 
'[' 4096 '!=' 0 ']' 00:06:36.166 17:40:55 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:36.166 17:40:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.166 17:40:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.166 17:40:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.166 17:40:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.166 17:40:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.427 17:40:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:36.427 { 00:06:36.427 "nbd_device": "/dev/nbd0", 00:06:36.427 "bdev_name": "Malloc0" 00:06:36.427 }, 00:06:36.427 { 00:06:36.427 "nbd_device": "/dev/nbd1", 00:06:36.427 "bdev_name": "Malloc1" 00:06:36.427 } 00:06:36.427 ]' 00:06:36.427 17:40:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:36.427 { 00:06:36.427 "nbd_device": "/dev/nbd0", 00:06:36.427 "bdev_name": "Malloc0" 00:06:36.427 }, 00:06:36.427 { 00:06:36.427 "nbd_device": "/dev/nbd1", 00:06:36.427 "bdev_name": "Malloc1" 00:06:36.427 } 00:06:36.427 ]' 00:06:36.427 17:40:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.427 17:40:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:36.427 /dev/nbd1' 00:06:36.427 17:40:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:36.427 /dev/nbd1' 00:06:36.427 17:40:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.427 17:40:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:36.427 17:40:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:36.427 17:40:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:36.427 17:40:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:36.427 17:40:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:36.427 17:40:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.427 17:40:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.427 17:40:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:36.427 17:40:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:36.428 17:40:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:36.428 17:40:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:36.428 256+0 records in 00:06:36.428 256+0 records out 00:06:36.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00525653 s, 199 MB/s 00:06:36.428 17:40:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.428 17:40:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:36.428 256+0 records in 00:06:36.428 256+0 records out 00:06:36.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204807 s, 51.2 MB/s 00:06:36.428 17:40:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.428 17:40:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 
count=256 oflag=direct 00:06:36.428 256+0 records in 00:06:36.428 256+0 records out 00:06:36.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0857359 s, 12.2 MB/s 00:06:36.428 17:40:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:36.428 17:40:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.428 17:40:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.428 17:40:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:36.428 17:40:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:36.428 17:40:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:36.428 17:40:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:36.428 17:40:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:36.428 17:40:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:36.428 17:40:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:36.428 17:40:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:36.428 17:40:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:36.428 17:40:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:36.428 17:40:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.428 17:40:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.428 17:40:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:36.428 17:40:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:36.428 17:40:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.428 17:40:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:36.689 17:40:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:36.689 17:40:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:36.689 17:40:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:36.689 17:40:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.689 17:40:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.689 17:40:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:36.689 17:40:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:36.689 17:40:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.689 17:40:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.689 17:40:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:36.948 17:40:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:36.948 17:40:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:36.948 17:40:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:36.948 17:40:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.948 
17:40:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.948 17:40:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:36.948 17:40:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:36.948 17:40:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.948 17:40:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.948 17:40:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.948 17:40:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.206 17:40:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:37.206 17:40:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.206 17:40:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:37.206 17:40:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:37.206 17:40:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:37.206 17:40:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.206 17:40:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:37.206 17:40:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:37.206 17:40:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:37.206 17:40:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:37.206 17:40:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:37.206 17:40:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:37.206 17:40:57 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:37.467 17:40:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:37.467 [2024-11-05 17:40:57.454444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:37.725 [2024-11-05 17:40:57.479274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.725 [2024-11-05 17:40:57.479380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.725 [2024-11-05 17:40:57.523659] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:37.725 [2024-11-05 17:40:57.523733] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:41.005 spdk_app_start Round 1 00:06:41.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:41.006 17:41:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:41.006 17:41:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:41.006 17:41:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71738 /var/tmp/spdk-nbd.sock 00:06:41.006 17:41:00 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 71738 ']' 00:06:41.006 17:41:00 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:41.006 17:41:00 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:41.006 17:41:00 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
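[editor's note] Round 0 above is one complete NBD verify cycle: export each malloc bdev as /dev/nbdN, wait until the kernel device answers a direct read, push 1 MiB of random data through each device, read it back with cmp, then tear the disks down and confirm nbd_get_disks returns an empty list. A simplified sketch of the verify core (temp paths shortened; as the waitfornbd trace shows, the harness also retries the /proc/partitions probe):

    waitfornbd() {                                   # readiness probe from the trace
        local name=$1 i size
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$name" /proc/partitions && break
            sleep 0.1
        done
        # One 4 KiB O_DIRECT read must land and be non-empty.
        dd if=/dev/$name of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }
    waitfornbd nbd0 && waitfornbd nbd1
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256   # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=/tmp/nbdrandtest of=$nbd bs=4096 count=256 oflag=direct
        cmp -b -n 1M /tmp/nbdrandtest $nbd                     # byte-for-byte readback
    done
    rm /tmp/nbdrandtest

The dd throughput figures in the log (e.g. 51.2 MB/s vs 12.2 MB/s for the two devices) are just these 1 MiB direct writes; cmp failing on any byte would abort the round.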
00:06:41.006 17:41:00 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:41.006 17:41:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:41.006 17:41:00 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:41.006 17:41:00 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:41.006 17:41:00 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:41.006 Malloc0 00:06:41.006 17:41:00 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:41.006 Malloc1 00:06:41.006 17:41:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:41.006 17:41:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.006 17:41:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:41.006 17:41:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:41.006 17:41:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.006 17:41:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:41.006 17:41:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:41.006 17:41:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.006 17:41:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:41.006 17:41:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:41.006 17:41:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.006 17:41:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:41.006 17:41:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:41.006 17:41:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:41.006 17:41:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.006 17:41:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:41.264 /dev/nbd0 00:06:41.264 17:41:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:41.264 17:41:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:41.264 17:41:01 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:41.264 17:41:01 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:41.264 17:41:01 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:41.264 17:41:01 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:41.264 17:41:01 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:41.264 17:41:01 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:41.264 17:41:01 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:41.264 17:41:01 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:41.264 17:41:01 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.264 1+0 records in 00:06:41.264 1+0 records out 
00:06:41.264 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386367 s, 10.6 MB/s 00:06:41.264 17:41:01 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.264 17:41:01 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:41.264 17:41:01 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.264 17:41:01 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:41.264 17:41:01 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:41.264 17:41:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.264 17:41:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.264 17:41:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:41.523 /dev/nbd1 00:06:41.523 17:41:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:41.523 17:41:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:41.523 17:41:01 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:41.523 17:41:01 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:41.523 17:41:01 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:41.523 17:41:01 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:41.523 17:41:01 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:41.523 17:41:01 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:41.523 17:41:01 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:41.523 17:41:01 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:41.523 17:41:01 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.523 1+0 records in 00:06:41.523 1+0 records out 00:06:41.523 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186585 s, 22.0 MB/s 00:06:41.523 17:41:01 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.523 17:41:01 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:41.523 17:41:01 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.523 17:41:01 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:41.523 17:41:01 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:41.523 17:41:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.523 17:41:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.523 17:41:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.523 17:41:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.523 17:41:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.781 17:41:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:41.781 { 00:06:41.781 "nbd_device": "/dev/nbd0", 00:06:41.781 "bdev_name": "Malloc0" 00:06:41.781 }, 00:06:41.781 { 00:06:41.781 "nbd_device": "/dev/nbd1", 00:06:41.781 "bdev_name": "Malloc1" 00:06:41.781 } 
00:06:41.781 ]' 00:06:41.781 17:41:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.781 17:41:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:41.781 { 00:06:41.781 "nbd_device": "/dev/nbd0", 00:06:41.781 "bdev_name": "Malloc0" 00:06:41.781 }, 00:06:41.781 { 00:06:41.781 "nbd_device": "/dev/nbd1", 00:06:41.781 "bdev_name": "Malloc1" 00:06:41.781 } 00:06:41.781 ]' 00:06:41.781 17:41:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:41.781 /dev/nbd1' 00:06:41.781 17:41:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:41.781 /dev/nbd1' 00:06:41.781 17:41:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.781 17:41:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:41.781 17:41:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:41.781 17:41:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:41.781 17:41:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:41.781 17:41:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:41.781 17:41:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.781 17:41:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.781 17:41:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:41.781 17:41:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:41.781 17:41:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:41.781 17:41:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:41.781 256+0 records in 00:06:41.781 256+0 records out 00:06:41.781 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0095671 s, 110 MB/s 00:06:41.781 17:41:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.781 17:41:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:41.781 256+0 records in 00:06:41.781 256+0 records out 00:06:41.781 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.017124 s, 61.2 MB/s 00:06:41.781 17:41:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.781 17:41:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:41.781 256+0 records in 00:06:41.781 256+0 records out 00:06:41.781 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0191205 s, 54.8 MB/s 00:06:42.038 17:41:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:42.038 17:41:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.038 17:41:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:42.038 17:41:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:42.038 17:41:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:42.038 17:41:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:42.038 17:41:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:42.038 17:41:01 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:42.038 17:41:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:42.038 17:41:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.038 17:41:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:42.038 17:41:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:42.038 17:41:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:42.038 17:41:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.038 17:41:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.038 17:41:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:42.038 17:41:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:42.038 17:41:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.038 17:41:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:42.038 17:41:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:42.038 17:41:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:42.038 17:41:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:42.038 17:41:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.038 17:41:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.038 17:41:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:42.038 17:41:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.038 17:41:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.038 17:41:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.038 17:41:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:42.297 17:41:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:42.297 17:41:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:42.297 17:41:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:42.297 17:41:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.297 17:41:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.297 17:41:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:42.297 17:41:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.297 17:41:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.297 17:41:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:42.297 17:41:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.297 17:41:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:42.555 17:41:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:42.555 17:41:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:42.555 17:41:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:42.555 17:41:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:42.555 17:41:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:42.555 17:41:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:42.555 17:41:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:42.555 17:41:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:42.555 17:41:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:42.555 17:41:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:42.555 17:41:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:42.555 17:41:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:42.555 17:41:02 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:42.814 17:41:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:42.814 [2024-11-05 17:41:02.783157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:42.814 [2024-11-05 17:41:02.803329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.814 [2024-11-05 17:41:02.803413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.073 [2024-11-05 17:41:02.842422] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:43.073 [2024-11-05 17:41:02.842615] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:46.383 spdk_app_start Round 2 00:06:46.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:46.383 17:41:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:46.383 17:41:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:46.383 17:41:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71738 /var/tmp/spdk-nbd.sock 00:06:46.383 17:41:05 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 71738 ']' 00:06:46.383 17:41:05 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:46.383 17:41:05 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:46.383 17:41:05 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
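[editor's note] The "Round N" transitions (Round 0 → 1 → 2 here) are the repeat loop itself. The app_repeat binary was started once with -t 4 and, after spdk_kill_instance SIGTERM, appears to restart its framework in place -- the log shows fresh "Total cores available" and reactor notices with no new launch line -- so each round only waits for the RPC socket, rebuilds the malloc bdevs, and reruns the NBD verify. A rough sketch of the driving loop (helper names follow the sockets and RPCs visible in the trace, not the verbatim event.sh):

    sock=/var/tmp/spdk-nbd.sock
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" "$sock"                    # app is listening again
        scripts/rpc.py -s "$sock" bdev_malloc_create 64 4096   # -> Malloc0
        scripts/rpc.py -s "$sock" bdev_malloc_create 64 4096   # -> Malloc1
        nbd_rpc_data_verify "$sock" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        scripts/rpc.py -s "$sock" spdk_kill_instance SIGTERM   # app restarts for the next round
        sleep 3                                                # give the framework time to cycle
    done

Round 2 below then follows the same Malloc0/Malloc1 create, waitfornbd, write/verify, and teardown pattern as the two rounds already traced.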
00:06:46.383 17:41:05 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:46.383 17:41:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:46.383 17:41:05 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:46.383 17:41:05 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:46.383 17:41:05 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:46.383 Malloc0 00:06:46.383 17:41:06 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:46.383 Malloc1 00:06:46.383 17:41:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:46.383 17:41:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.383 17:41:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:46.383 17:41:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:46.383 17:41:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.383 17:41:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:46.383 17:41:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:46.383 17:41:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.383 17:41:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:46.383 17:41:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:46.383 17:41:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.383 17:41:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:46.383 17:41:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:46.383 17:41:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:46.383 17:41:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.383 17:41:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:46.641 /dev/nbd0 00:06:46.641 17:41:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:46.641 17:41:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:46.641 17:41:06 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:46.641 17:41:06 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:46.641 17:41:06 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:46.641 17:41:06 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:46.641 17:41:06 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:46.641 17:41:06 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:46.641 17:41:06 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:46.641 17:41:06 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:46.641 17:41:06 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:46.641 1+0 records in 00:06:46.641 1+0 records out 
00:06:46.641 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442217 s, 9.3 MB/s 00:06:46.641 17:41:06 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:46.641 17:41:06 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:46.641 17:41:06 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:46.641 17:41:06 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:46.641 17:41:06 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:46.641 17:41:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.641 17:41:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.641 17:41:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:46.899 /dev/nbd1 00:06:46.899 17:41:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:46.899 17:41:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:46.899 17:41:06 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:46.899 17:41:06 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:46.899 17:41:06 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:46.899 17:41:06 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:46.899 17:41:06 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:46.899 17:41:06 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:46.899 17:41:06 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:46.899 17:41:06 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:46.899 17:41:06 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:46.899 1+0 records in 00:06:46.899 1+0 records out 00:06:46.899 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187432 s, 21.9 MB/s 00:06:46.899 17:41:06 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:46.900 17:41:06 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:46.900 17:41:06 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:46.900 17:41:06 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:46.900 17:41:06 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:46.900 17:41:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.900 17:41:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.900 17:41:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.900 17:41:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.900 17:41:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:47.158 17:41:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:47.158 { 00:06:47.158 "nbd_device": "/dev/nbd0", 00:06:47.158 "bdev_name": "Malloc0" 00:06:47.158 }, 00:06:47.158 { 00:06:47.158 "nbd_device": "/dev/nbd1", 00:06:47.158 "bdev_name": "Malloc1" 00:06:47.158 } 
00:06:47.158 ]' 00:06:47.158 17:41:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.158 17:41:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:47.158 { 00:06:47.158 "nbd_device": "/dev/nbd0", 00:06:47.158 "bdev_name": "Malloc0" 00:06:47.158 }, 00:06:47.158 { 00:06:47.158 "nbd_device": "/dev/nbd1", 00:06:47.158 "bdev_name": "Malloc1" 00:06:47.158 } 00:06:47.158 ]' 00:06:47.158 17:41:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:47.158 /dev/nbd1' 00:06:47.158 17:41:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:47.158 /dev/nbd1' 00:06:47.158 17:41:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.158 17:41:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:47.158 17:41:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:47.158 17:41:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:47.158 17:41:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:47.158 17:41:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:47.158 17:41:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.158 17:41:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:47.158 17:41:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:47.158 17:41:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:47.158 17:41:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:47.159 17:41:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:47.159 256+0 records in 00:06:47.159 256+0 records out 00:06:47.159 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00918042 s, 114 MB/s 00:06:47.159 17:41:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.159 17:41:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:47.159 256+0 records in 00:06:47.159 256+0 records out 00:06:47.159 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0189293 s, 55.4 MB/s 00:06:47.159 17:41:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.159 17:41:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:47.159 256+0 records in 00:06:47.159 256+0 records out 00:06:47.159 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0181248 s, 57.9 MB/s 00:06:47.159 17:41:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:47.159 17:41:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.159 17:41:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:47.159 17:41:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:47.159 17:41:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:47.159 17:41:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:47.159 17:41:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:47.159 17:41:07 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.159 17:41:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:47.159 17:41:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.159 17:41:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:47.159 17:41:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:47.159 17:41:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:47.159 17:41:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.159 17:41:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.159 17:41:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:47.159 17:41:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:47.159 17:41:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.159 17:41:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:47.417 17:41:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:47.417 17:41:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:47.417 17:41:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:47.417 17:41:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.417 17:41:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.417 17:41:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:47.417 17:41:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:47.417 17:41:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.417 17:41:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.417 17:41:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:47.676 17:41:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:47.676 17:41:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:47.676 17:41:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:47.676 17:41:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.676 17:41:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.676 17:41:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:47.676 17:41:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:47.676 17:41:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.676 17:41:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:47.676 17:41:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.676 17:41:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:47.934 17:41:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:47.934 17:41:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:47.934 17:41:07 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:47.934 17:41:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:47.934 17:41:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.934 17:41:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:47.934 17:41:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:47.934 17:41:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:47.934 17:41:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:47.934 17:41:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:47.934 17:41:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:47.934 17:41:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:47.934 17:41:07 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:48.192 17:41:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:48.192 [2024-11-05 17:41:08.109620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:48.192 [2024-11-05 17:41:08.130084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.192 [2024-11-05 17:41:08.130102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.192 [2024-11-05 17:41:08.169312] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:48.192 [2024-11-05 17:41:08.169378] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:51.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:51.476 17:41:11 event.app_repeat -- event/event.sh@38 -- # waitforlisten 71738 /var/tmp/spdk-nbd.sock 00:06:51.476 17:41:11 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 71738 ']' 00:06:51.476 17:41:11 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:51.476 17:41:11 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:51.476 17:41:11 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
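The sequence above is the heart of the app_repeat data path: two 64 MB malloc bdevs (4 KiB blocks) are created over the RPC socket, exported as /dev/nbd0 and /dev/nbd1, filled from a 1 MiB random pattern, byte-compared with cmp, and torn down until nbd_get_disks returns an empty list. A minimal standalone sketch of the same round-trip, assuming a running SPDK target listening on /var/tmp/spdk-nbd.sock and rpc.py/jq on the path:

RPC="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$RPC bdev_malloc_create 64 4096                       # 64 MB malloc bdev, 4 KiB blocks; prints the bdev name (e.g. Malloc0)
$RPC nbd_start_disk Malloc0 /dev/nbd0                 # expose the bdev as a kernel NBD device
dd if=/dev/urandom of=/tmp/pattern bs=4096 count=256  # 1 MiB reference pattern
dd if=/tmp/pattern of=/dev/nbd0 bs=4096 count=256 oflag=direct
cmp -b -n 1M /tmp/pattern /dev/nbd0                   # verify the device contents byte for byte
$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_get_disks | jq -r '.[] | .nbd_device'        # prints nothing once all exports are stopped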
00:06:51.476 17:41:11 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:51.476 17:41:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:51.476 17:41:11 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:51.476 17:41:11 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:51.476 17:41:11 event.app_repeat -- event/event.sh@39 -- # killprocess 71738 00:06:51.476 17:41:11 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 71738 ']' 00:06:51.476 17:41:11 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 71738 00:06:51.476 17:41:11 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:06:51.476 17:41:11 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:51.476 17:41:11 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71738 00:06:51.476 killing process with pid 71738 00:06:51.476 17:41:11 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:51.476 17:41:11 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:51.476 17:41:11 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71738' 00:06:51.476 17:41:11 event.app_repeat -- common/autotest_common.sh@971 -- # kill 71738 00:06:51.476 17:41:11 event.app_repeat -- common/autotest_common.sh@976 -- # wait 71738 00:06:51.476 spdk_app_start is called in Round 0. 00:06:51.476 Shutdown signal received, stop current app iteration 00:06:51.476 Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 reinitialization... 00:06:51.476 spdk_app_start is called in Round 1. 00:06:51.476 Shutdown signal received, stop current app iteration 00:06:51.476 Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 reinitialization... 00:06:51.476 spdk_app_start is called in Round 2. 00:06:51.476 Shutdown signal received, stop current app iteration 00:06:51.476 Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 reinitialization... 00:06:51.476 spdk_app_start is called in Round 3. 00:06:51.476 Shutdown signal received, stop current app iteration 00:06:51.476 ************************************ 00:06:51.476 END TEST app_repeat 00:06:51.476 ************************************ 00:06:51.476 17:41:11 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:51.476 17:41:11 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:51.476 00:06:51.476 real 0m17.256s 00:06:51.476 user 0m38.435s 00:06:51.476 sys 0m2.277s 00:06:51.476 17:41:11 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:51.476 17:41:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:51.476 17:41:11 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:51.476 17:41:11 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:51.476 17:41:11 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:51.476 17:41:11 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:51.476 17:41:11 event -- common/autotest_common.sh@10 -- # set +x 00:06:51.476 ************************************ 00:06:51.476 START TEST cpu_locks 00:06:51.476 ************************************ 00:06:51.476 17:41:11 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:51.771 * Looking for test storage... 
00:06:51.772 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:51.772 17:41:11 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:51.772 17:41:11 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:06:51.772 17:41:11 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:51.772 17:41:11 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:51.772 17:41:11 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.772 17:41:11 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.772 17:41:11 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.772 17:41:11 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.772 17:41:11 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.772 17:41:11 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.772 17:41:11 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.772 17:41:11 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.772 17:41:11 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.772 17:41:11 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.772 17:41:11 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.772 17:41:11 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:51.772 17:41:11 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:51.772 17:41:11 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.772 17:41:11 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:51.772 17:41:11 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:51.772 17:41:11 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:51.772 17:41:11 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.772 17:41:11 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:51.772 17:41:11 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.772 17:41:11 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:51.772 17:41:11 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:51.772 17:41:11 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.772 17:41:11 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:51.772 17:41:11 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:51.772 17:41:11 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:51.772 17:41:11 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:51.772 17:41:11 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:51.772 17:41:11 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.772 17:41:11 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:51.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.772 --rc genhtml_branch_coverage=1 00:06:51.772 --rc genhtml_function_coverage=1 00:06:51.772 --rc genhtml_legend=1 00:06:51.772 --rc geninfo_all_blocks=1 00:06:51.772 --rc geninfo_unexecuted_blocks=1 00:06:51.772 00:06:51.772 ' 00:06:51.772 17:41:11 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:51.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.772 --rc genhtml_branch_coverage=1 00:06:51.772 --rc genhtml_function_coverage=1 
00:06:51.772 --rc genhtml_legend=1 00:06:51.772 --rc geninfo_all_blocks=1 00:06:51.772 --rc geninfo_unexecuted_blocks=1 00:06:51.772 00:06:51.772 ' 00:06:51.772 17:41:11 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:51.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.772 --rc genhtml_branch_coverage=1 00:06:51.772 --rc genhtml_function_coverage=1 00:06:51.772 --rc genhtml_legend=1 00:06:51.772 --rc geninfo_all_blocks=1 00:06:51.772 --rc geninfo_unexecuted_blocks=1 00:06:51.772 00:06:51.772 ' 00:06:51.772 17:41:11 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:51.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.772 --rc genhtml_branch_coverage=1 00:06:51.772 --rc genhtml_function_coverage=1 00:06:51.772 --rc genhtml_legend=1 00:06:51.772 --rc geninfo_all_blocks=1 00:06:51.772 --rc geninfo_unexecuted_blocks=1 00:06:51.772 00:06:51.772 ' 00:06:51.772 17:41:11 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:51.772 17:41:11 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:51.772 17:41:11 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:51.772 17:41:11 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:51.772 17:41:11 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:51.772 17:41:11 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:51.772 17:41:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.772 ************************************ 00:06:51.772 START TEST default_locks 00:06:51.772 ************************************ 00:06:51.772 17:41:11 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:06:51.772 17:41:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=72163 00:06:51.772 17:41:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 72163 00:06:51.772 17:41:11 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 72163 ']' 00:06:51.772 17:41:11 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.772 17:41:11 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:51.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.772 17:41:11 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.772 17:41:11 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:51.772 17:41:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.772 17:41:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:51.772 [2024-11-05 17:41:11.670496] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 
00:06:51.772 [2024-11-05 17:41:11.670640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72163 ] 00:06:52.034 [2024-11-05 17:41:11.803718] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:52.034 [2024-11-05 17:41:11.834167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.034 [2024-11-05 17:41:11.874485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.607 17:41:12 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:52.607 17:41:12 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:06:52.607 17:41:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 72163 00:06:52.607 17:41:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:52.607 17:41:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 72163 00:06:52.869 17:41:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 72163 00:06:52.869 17:41:12 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 72163 ']' 00:06:52.869 17:41:12 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 72163 00:06:52.869 17:41:12 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:06:52.869 17:41:12 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:52.869 17:41:12 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72163 00:06:52.869 17:41:12 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:52.869 killing process with pid 72163 00:06:52.869 17:41:12 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:52.869 17:41:12 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72163' 00:06:52.869 17:41:12 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 72163 00:06:52.869 17:41:12 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 72163 00:06:53.442 17:41:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 72163 00:06:53.442 17:41:13 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:53.442 17:41:13 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 72163 00:06:53.442 17:41:13 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:53.442 17:41:13 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.442 17:41:13 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:53.442 17:41:13 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.442 17:41:13 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 72163 00:06:53.442 17:41:13 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 72163 ']' 00:06:53.442 17:41:13 event.cpu_locks.default_locks -- common/autotest_common.sh@837 
-- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.442 ERROR: process (pid: 72163) is no longer running 00:06:53.442 17:41:13 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:53.442 17:41:13 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.442 17:41:13 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:53.442 17:41:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.442 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (72163) - No such process 00:06:53.442 17:41:13 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:53.442 17:41:13 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:06:53.442 17:41:13 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:53.442 17:41:13 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:53.442 17:41:13 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:53.442 17:41:13 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:53.442 17:41:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:53.442 17:41:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:53.442 17:41:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:53.442 17:41:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:53.442 ************************************ 00:06:53.442 END TEST default_locks 00:06:53.442 ************************************ 00:06:53.442 00:06:53.442 real 0m1.667s 00:06:53.442 user 0m1.585s 00:06:53.442 sys 0m0.551s 00:06:53.442 17:41:13 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:53.442 17:41:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.442 17:41:13 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:53.442 17:41:13 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:53.442 17:41:13 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:53.442 17:41:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.442 ************************************ 00:06:53.442 START TEST default_locks_via_rpc 00:06:53.442 ************************************ 00:06:53.442 17:41:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:06:53.442 17:41:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=72210 00:06:53.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
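Where default_locks killed the target and asserted the core lock file disappeared, default_locks_via_rpc toggles the per-core lock files on a live target. A condensed sketch of the toggle performed in the trace that follows, assuming $pid holds the spdk_tgt pid and that rpc.py talks to /var/tmp/spdk.sock by default:

RPC="scripts/rpc.py"
$RPC framework_disable_cpumask_locks                     # release the core lock files at runtime
lslocks -p "$pid" | grep -q spdk_cpu_lock || echo "no core lock held"
$RPC framework_enable_cpumask_locks                      # re-claim them without restarting
lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock re-acquired"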
00:06:53.442 17:41:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 72210 00:06:53.442 17:41:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 72210 ']' 00:06:53.442 17:41:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.442 17:41:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:53.442 17:41:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.442 17:41:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:53.442 17:41:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.442 17:41:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:53.442 [2024-11-05 17:41:13.408976] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:06:53.442 [2024-11-05 17:41:13.409136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72210 ] 00:06:53.704 [2024-11-05 17:41:13.543372] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:53.704 [2024-11-05 17:41:13.574160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.704 [2024-11-05 17:41:13.601472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.277 17:41:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:54.277 17:41:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:54.277 17:41:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:54.277 17:41:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.277 17:41:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.277 17:41:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.277 17:41:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:54.277 17:41:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:54.277 17:41:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:54.277 17:41:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:54.277 17:41:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:54.277 17:41:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.277 17:41:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.277 17:41:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.277 17:41:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 72210 00:06:54.277 17:41:14 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 72210 00:06:54.277 17:41:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:54.538 17:41:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 72210 00:06:54.538 17:41:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 72210 ']' 00:06:54.538 17:41:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 72210 00:06:54.538 17:41:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:06:54.538 17:41:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:54.538 17:41:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72210 00:06:54.538 17:41:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:54.538 killing process with pid 72210 00:06:54.538 17:41:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:54.538 17:41:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72210' 00:06:54.538 17:41:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 72210 00:06:54.538 17:41:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 72210 00:06:55.111 00:06:55.111 real 0m1.658s 00:06:55.111 user 0m1.664s 00:06:55.111 sys 0m0.460s 00:06:55.111 ************************************ 00:06:55.111 END TEST default_locks_via_rpc 00:06:55.111 ************************************ 00:06:55.111 17:41:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:55.111 17:41:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.111 17:41:15 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:55.111 17:41:15 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:55.111 17:41:15 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:55.111 17:41:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.111 ************************************ 00:06:55.111 START TEST non_locking_app_on_locked_coremask 00:06:55.111 ************************************ 00:06:55.111 17:41:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:06:55.111 17:41:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=72257 00:06:55.111 17:41:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 72257 /var/tmp/spdk.sock 00:06:55.111 17:41:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 72257 ']' 00:06:55.111 17:41:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.111 17:41:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:55.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
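non_locking_app_on_locked_coremask then runs two targets on the same core mask: the first (pid 72257) claims core 0 as usual, while the second is started with --disable-cpumask-locks and a separate RPC socket so it can come up without contending for the lock. A sketch of the scenario using the flags from the trace, assuming it is run from the SPDK repo root with both binaries backgrounded:

build/bin/spdk_tgt -m 0x1 &                                                  # first instance claims core 0
pid1=$!
build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # second instance skips the claim
pid2=$!
lslocks -p "$pid1" | grep -q spdk_cpu_lock && echo "first instance holds the core-0 lock"
lslocks -p "$pid2" | grep -q spdk_cpu_lock || echo "second instance holds no core lock"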
00:06:55.111 17:41:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:55.111 17:41:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.111 17:41:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:55.111 17:41:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.372 [2024-11-05 17:41:15.117763] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:06:55.372 [2024-11-05 17:41:15.117888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72257 ] 00:06:55.372 [2024-11-05 17:41:15.248484] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:55.372 [2024-11-05 17:41:15.271481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.372 [2024-11-05 17:41:15.295996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.316 17:41:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:56.316 17:41:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:56.316 17:41:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=72273 00:06:56.316 17:41:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 72273 /var/tmp/spdk2.sock 00:06:56.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:56.316 17:41:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 72273 ']' 00:06:56.316 17:41:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:56.316 17:41:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:56.316 17:41:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:56.316 17:41:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:56.316 17:41:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:56.316 17:41:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.316 [2024-11-05 17:41:16.023921] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 
00:06:56.316 [2024-11-05 17:41:16.024040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72273 ] 00:06:56.316 [2024-11-05 17:41:16.152504] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:56.316 [2024-11-05 17:41:16.199357] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:56.316 [2024-11-05 17:41:16.199429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.316 [2024-11-05 17:41:16.248628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.888 17:41:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:56.888 17:41:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:56.888 17:41:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 72257 00:06:56.888 17:41:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72257 00:06:56.888 17:41:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:57.150 17:41:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 72257 00:06:57.150 17:41:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 72257 ']' 00:06:57.150 17:41:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 72257 00:06:57.150 17:41:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:57.150 17:41:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:57.150 17:41:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72257 00:06:57.410 killing process with pid 72257 00:06:57.410 17:41:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:57.410 17:41:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:57.410 17:41:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72257' 00:06:57.410 17:41:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 72257 00:06:57.410 17:41:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 72257 00:06:57.982 17:41:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 72273 00:06:57.982 17:41:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 72273 ']' 00:06:57.982 17:41:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 72273 00:06:57.982 17:41:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:57.982 17:41:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:57.982 17:41:17 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72273 00:06:57.982 17:41:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:57.982 killing process with pid 72273 00:06:57.982 17:41:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:57.982 17:41:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72273' 00:06:57.982 17:41:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 72273 00:06:57.982 17:41:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 72273 00:06:58.243 00:06:58.243 real 0m3.009s 00:06:58.243 user 0m3.196s 00:06:58.243 sys 0m0.885s 00:06:58.243 17:41:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:58.243 ************************************ 00:06:58.243 END TEST non_locking_app_on_locked_coremask 00:06:58.243 ************************************ 00:06:58.243 17:41:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.243 17:41:18 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:58.243 17:41:18 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:58.243 17:41:18 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:58.243 17:41:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.243 ************************************ 00:06:58.243 START TEST locking_app_on_unlocked_coremask 00:06:58.243 ************************************ 00:06:58.243 17:41:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:06:58.243 17:41:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=72331 00:06:58.243 17:41:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 72331 /var/tmp/spdk.sock 00:06:58.243 17:41:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 72331 ']' 00:06:58.243 17:41:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.243 17:41:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:58.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.243 17:41:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.243 17:41:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:58.243 17:41:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.243 17:41:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:58.243 [2024-11-05 17:41:18.192831] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 
00:06:58.243 [2024-11-05 17:41:18.192956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72331 ] 00:06:58.504 [2024-11-05 17:41:18.322325] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:58.504 [2024-11-05 17:41:18.350074] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:58.504 [2024-11-05 17:41:18.350112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.504 [2024-11-05 17:41:18.375311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.076 17:41:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:59.076 17:41:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:59.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:59.076 17:41:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=72347 00:06:59.076 17:41:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 72347 /var/tmp/spdk2.sock 00:06:59.076 17:41:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 72347 ']' 00:06:59.076 17:41:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:59.076 17:41:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:59.076 17:41:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:59.076 17:41:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:59.076 17:41:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:59.076 17:41:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.336 [2024-11-05 17:41:19.108039] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:06:59.336 [2024-11-05 17:41:19.108195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72347 ] 00:06:59.336 [2024-11-05 17:41:19.239684] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
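This test inverts the previous one: the first target (pid 72331) was launched with --disable-cpumask-locks, so core 0 is left unclaimed and the second target (pid 72347), started without the flag, takes the lock itself; locks_exist is therefore checked against 72347. The lock and liveness helpers used throughout the suite reduce to roughly the following, a condensed sketch that omits the retry loops and the sudo special case in the real autotest_common.sh:

locks_exist() {                        # does pid $1 hold a per-core lock file?
    lslocks -p "$1" | grep -q spdk_cpu_lock
}
killprocess() {                        # kill pid $1 only if it is still a live reactor
    kill -0 "$1" || return 1           # pid still exists?
    ps --no-headers -o comm= "$1"      # in the trace this reports reactor_0
    kill "$1" && wait "$1"
}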
00:06:59.336 [2024-11-05 17:41:19.282555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.596 [2024-11-05 17:41:19.336893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.167 17:41:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:00.167 17:41:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:00.167 17:41:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 72347 00:07:00.167 17:41:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72347 00:07:00.167 17:41:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:00.429 17:41:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 72331 00:07:00.429 17:41:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 72331 ']' 00:07:00.429 17:41:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 72331 00:07:00.429 17:41:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:00.429 17:41:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:00.430 17:41:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72331 00:07:00.430 17:41:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:00.430 killing process with pid 72331 00:07:00.430 17:41:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:00.430 17:41:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72331' 00:07:00.430 17:41:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 72331 00:07:00.430 17:41:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 72331 00:07:01.002 17:41:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 72347 00:07:01.002 17:41:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 72347 ']' 00:07:01.002 17:41:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 72347 00:07:01.002 17:41:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:01.002 17:41:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:01.002 17:41:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72347 00:07:01.263 17:41:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:01.263 17:41:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:01.263 killing process with pid 72347 00:07:01.263 17:41:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72347' 00:07:01.263 17:41:21 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@971 -- # kill 72347 00:07:01.263 17:41:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 72347 00:07:01.523 00:07:01.523 real 0m3.213s 00:07:01.523 user 0m3.427s 00:07:01.523 sys 0m0.888s 00:07:01.523 17:41:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:01.523 ************************************ 00:07:01.523 END TEST locking_app_on_unlocked_coremask 00:07:01.523 ************************************ 00:07:01.523 17:41:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.523 17:41:21 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:01.523 17:41:21 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:01.523 17:41:21 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:01.523 17:41:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:01.523 ************************************ 00:07:01.523 START TEST locking_app_on_locked_coremask 00:07:01.523 ************************************ 00:07:01.523 17:41:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:07:01.523 17:41:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=72405 00:07:01.523 17:41:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 72405 /var/tmp/spdk.sock 00:07:01.523 17:41:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 72405 ']' 00:07:01.523 17:41:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.523 17:41:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:01.523 17:41:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:01.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.523 17:41:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.523 17:41:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:01.523 17:41:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.523 [2024-11-05 17:41:21.472573] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:07:01.523 [2024-11-05 17:41:21.472695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72405 ] 00:07:01.781 [2024-11-05 17:41:21.604362] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:01.781 [2024-11-05 17:41:21.627334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.781 [2024-11-05 17:41:21.651599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.346 17:41:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:02.346 17:41:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:02.346 17:41:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:02.346 17:41:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=72421 00:07:02.346 17:41:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 72421 /var/tmp/spdk2.sock 00:07:02.346 17:41:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:02.346 17:41:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 72421 /var/tmp/spdk2.sock 00:07:02.346 17:41:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:02.346 17:41:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.346 17:41:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:02.346 17:41:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.346 17:41:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 72421 /var/tmp/spdk2.sock 00:07:02.346 17:41:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 72421 ']' 00:07:02.346 17:41:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:02.346 17:41:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:02.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:02.346 17:41:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:02.346 17:41:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:02.346 17:41:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.604 [2024-11-05 17:41:22.383909] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:07:02.604 [2024-11-05 17:41:22.384029] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72421 ] 00:07:02.604 [2024-11-05 17:41:22.512058] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:02.604 [2024-11-05 17:41:22.554933] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 72405 has claimed it. 
00:07:02.604 [2024-11-05 17:41:22.555000] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:03.172 ERROR: process (pid: 72421) is no longer running 00:07:03.172 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (72421) - No such process 00:07:03.172 17:41:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:03.172 17:41:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:07:03.172 17:41:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:03.172 17:41:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.172 17:41:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:03.172 17:41:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.172 17:41:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 72405 00:07:03.172 17:41:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:03.172 17:41:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72405 00:07:03.432 17:41:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 72405 00:07:03.432 17:41:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 72405 ']' 00:07:03.432 17:41:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 72405 00:07:03.432 17:41:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:03.432 17:41:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:03.432 17:41:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72405 00:07:03.432 17:41:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:03.432 killing process with pid 72405 00:07:03.432 17:41:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:03.432 17:41:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72405' 00:07:03.432 17:41:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 72405 00:07:03.432 17:41:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 72405 00:07:04.005 00:07:04.005 real 0m2.334s 00:07:04.005 user 0m2.524s 00:07:04.005 sys 0m0.574s 00:07:04.005 ************************************ 00:07:04.005 END TEST locking_app_on_locked_coremask 00:07:04.005 ************************************ 00:07:04.005 17:41:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:04.005 17:41:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.005 17:41:23 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:04.005 17:41:23 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:04.005 17:41:23 event.cpu_locks 
-- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:04.005 17:41:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.005 ************************************ 00:07:04.005 START TEST locking_overlapped_coremask 00:07:04.005 ************************************ 00:07:04.005 17:41:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:07:04.005 17:41:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=72474 00:07:04.005 17:41:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 72474 /var/tmp/spdk.sock 00:07:04.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.005 17:41:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 72474 ']' 00:07:04.005 17:41:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.005 17:41:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:04.005 17:41:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.005 17:41:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:04.005 17:41:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:04.005 17:41:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.005 [2024-11-05 17:41:23.888716] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:07:04.005 [2024-11-05 17:41:23.888875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72474 ] 00:07:04.266 [2024-11-05 17:41:24.025541] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
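The -m 0x7 argument is a hex bitmap of reactor cores: bits 0 through 2 are set, which matches the 'Total cores available: 3' notice and the three reactors started below. Decoding any mask from the shell is a short loop:

  # Print the core indices selected by a hex cpumask (sketch).
  mask=0x7
  for core in $(seq 0 63); do
      (( (mask >> core) & 1 )) && echo "core $core selected"
  done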
00:07:04.266 [2024-11-05 17:41:24.054147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:04.266 [2024-11-05 17:41:24.101426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.266 [2024-11-05 17:41:24.101786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.266 [2024-11-05 17:41:24.101823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.838 17:41:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:04.838 17:41:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:04.838 17:41:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=72492 00:07:04.838 17:41:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 72492 /var/tmp/spdk2.sock 00:07:04.838 17:41:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:04.838 17:41:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 72492 /var/tmp/spdk2.sock 00:07:04.839 17:41:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:04.839 17:41:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:04.839 17:41:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.839 17:41:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:04.839 17:41:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.839 17:41:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 72492 /var/tmp/spdk2.sock 00:07:04.839 17:41:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 72492 ']' 00:07:04.839 17:41:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:04.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:04.839 17:41:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:04.839 17:41:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:04.839 17:41:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:04.839 17:41:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:05.099 [2024-11-05 17:41:24.838845] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:07:05.100 [2024-11-05 17:41:24.839489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72492 ] 00:07:05.100 [2024-11-05 17:41:24.974654] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:07:05.100 [2024-11-05 17:41:25.020592] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72474 has claimed it. 00:07:05.100 [2024-11-05 17:41:25.020656] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:05.667 ERROR: process (pid: 72492) is no longer running 00:07:05.667 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (72492) - No such process 00:07:05.667 17:41:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:05.667 17:41:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:07:05.667 17:41:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:05.667 17:41:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:05.667 17:41:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:05.667 17:41:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:05.667 17:41:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:05.667 17:41:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:05.667 17:41:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:05.667 17:41:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:05.667 17:41:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 72474 00:07:05.667 17:41:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 72474 ']' 00:07:05.667 17:41:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 72474 00:07:05.667 17:41:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:07:05.667 17:41:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:05.667 17:41:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72474 00:07:05.668 17:41:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:05.668 17:41:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:05.668 17:41:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72474' 00:07:05.668 killing process with pid 72474 00:07:05.668 17:41:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 72474 00:07:05.668 17:41:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 72474 00:07:05.984 00:07:05.984 real 0m2.102s 00:07:05.984 user 0m5.543s 00:07:05.984 sys 0m0.617s 00:07:05.984 ************************************ 00:07:05.984 17:41:25 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:07:05.984 17:41:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:05.984 END TEST locking_overlapped_coremask 00:07:05.984 ************************************ 00:07:05.984 17:41:25 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:05.984 17:41:25 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:05.984 17:41:25 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:05.984 17:41:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.263 ************************************ 00:07:06.263 START TEST locking_overlapped_coremask_via_rpc 00:07:06.263 ************************************ 00:07:06.263 17:41:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:07:06.263 17:41:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=72534 00:07:06.263 17:41:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 72534 /var/tmp/spdk.sock 00:07:06.263 17:41:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 72534 ']' 00:07:06.263 17:41:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.263 17:41:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:06.263 17:41:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:06.263 17:41:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.263 17:41:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:06.263 17:41:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.263 [2024-11-05 17:41:26.053103] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:07:06.263 [2024-11-05 17:41:26.053236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72534 ] 00:07:06.263 [2024-11-05 17:41:26.187589] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:06.263 [2024-11-05 17:41:26.210119] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
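The overlapped-coremask failure above reduces to a single shared bit: the first target held 0x7 (cores 0-2) and the intruder asked for 0x1c (cores 2-4), so intersecting the two masks names exactly the contested core:

  printf '0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. bit 2 set: core 2 is contested

The via_rpc variant starting here boots both targets with --disable-cpumask-locks, so the same overlapping masks coexist at startup and the claim is deferred to the framework_enable_cpumask_locks RPC exercised below.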
00:07:06.263 [2024-11-05 17:41:26.210154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:06.263 [2024-11-05 17:41:26.236778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.263 [2024-11-05 17:41:26.237124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.263 [2024-11-05 17:41:26.237134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.197 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:07.197 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:07.197 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:07.197 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=72552 00:07:07.197 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 72552 /var/tmp/spdk2.sock 00:07:07.197 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 72552 ']' 00:07:07.197 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:07.197 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:07.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:07.197 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:07.197 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:07.197 17:41:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.197 [2024-11-05 17:41:26.964654] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:07:07.197 [2024-11-05 17:41:26.964772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72552 ] 00:07:07.197 [2024-11-05 17:41:27.099547] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:07.197 [2024-11-05 17:41:27.141046] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:07.197 [2024-11-05 17:41:27.141089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:07.197 [2024-11-05 17:41:27.186544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:07.197 [2024-11-05 17:41:27.186558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.197 [2024-11-05 17:41:27.186622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:08.133 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:08.133 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:08.133 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:08.133 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.133 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.133 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.133 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:08.133 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:08.133 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:08.133 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:08.133 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.133 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:08.133 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.133 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:08.133 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.133 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.133 [2024-11-05 17:41:27.823232] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72534 has claimed it. 
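When a claim fails like this, the holder can be confirmed from outside the targets; the suite's locks_exist and check_remaining_locks helpers do it with lslocks and a glob over the per-core lock files. A minimal version of the same checks:

  # Does pid 72534 hold SPDK core locks?
  lslocks -p 72534 | grep spdk_cpu_lock
  # One lock file per claimed core; for mask 0x7 expect suffixes 000..002.
  ls /var/tmp/spdk_cpu_lock_*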
00:07:08.133 request: 00:07:08.133 { 00:07:08.133 "method": "framework_enable_cpumask_locks", 00:07:08.133 "req_id": 1 00:07:08.133 } 00:07:08.133 Got JSON-RPC error response 00:07:08.133 response: 00:07:08.133 { 00:07:08.133 "code": -32603, 00:07:08.133 "message": "Failed to claim CPU core: 2" 00:07:08.133 } 00:07:08.133 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:08.133 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:08.133 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:08.133 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:08.133 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:08.133 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 72534 /var/tmp/spdk.sock 00:07:08.133 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 72534 ']' 00:07:08.133 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.133 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:08.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.133 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.133 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:08.133 17:41:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.133 17:41:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:08.133 17:41:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:08.133 17:41:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 72552 /var/tmp/spdk2.sock 00:07:08.133 17:41:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 72552 ']' 00:07:08.133 17:41:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:08.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:08.133 17:41:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:08.133 17:41:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
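The JSON-RPC exchange above is the heart of the via_rpc variant: locks skipped at boot are claimed on demand, and the second claim collides. Driving it by hand would look roughly like this, assuming both targets are still listening:

  # First target claims cores 0-2; succeeds.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
  # Second target overlaps on core 2; expect the -32603 "Failed to claim CPU core: 2" error.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks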
00:07:08.133 17:41:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:08.133 17:41:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.392 17:41:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:08.392 17:41:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:08.392 17:41:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:08.392 17:41:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:08.392 17:41:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:08.392 17:41:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:08.392 00:07:08.392 real 0m2.274s 00:07:08.392 user 0m1.071s 00:07:08.392 sys 0m0.138s 00:07:08.393 17:41:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:08.393 ************************************ 00:07:08.393 END TEST locking_overlapped_coremask_via_rpc 00:07:08.393 17:41:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.393 ************************************ 00:07:08.393 17:41:28 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:08.393 17:41:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72534 ]] 00:07:08.393 17:41:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72534 00:07:08.393 17:41:28 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 72534 ']' 00:07:08.393 17:41:28 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 72534 00:07:08.393 17:41:28 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:07:08.393 17:41:28 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:08.393 17:41:28 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72534 00:07:08.393 17:41:28 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:08.393 17:41:28 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:08.393 17:41:28 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72534' 00:07:08.393 killing process with pid 72534 00:07:08.393 17:41:28 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 72534 00:07:08.393 17:41:28 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 72534 00:07:08.651 17:41:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72552 ]] 00:07:08.651 17:41:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72552 00:07:08.651 17:41:28 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 72552 ']' 00:07:08.651 17:41:28 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 72552 00:07:08.651 17:41:28 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:07:08.651 17:41:28 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:08.651 
17:41:28 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72552 00:07:08.651 17:41:28 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:07:08.651 17:41:28 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:07:08.651 killing process with pid 72552 00:07:08.651 17:41:28 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72552' 00:07:08.651 17:41:28 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 72552 00:07:08.651 17:41:28 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 72552 00:07:08.911 17:41:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:08.911 17:41:28 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:08.911 17:41:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72534 ]] 00:07:08.911 17:41:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72534 00:07:08.911 17:41:28 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 72534 ']' 00:07:08.911 17:41:28 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 72534 00:07:08.911 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (72534) - No such process 00:07:08.911 17:41:28 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 72534 is not found' 00:07:08.911 Process with pid 72534 is not found 00:07:08.911 17:41:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72552 ]] 00:07:08.911 17:41:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72552 00:07:08.911 17:41:28 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 72552 ']' 00:07:08.911 17:41:28 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 72552 00:07:08.911 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (72552) - No such process 00:07:08.911 Process with pid 72552 is not found 00:07:08.911 17:41:28 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 72552 is not found' 00:07:08.911 17:41:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:08.911 00:07:08.911 real 0m17.417s 00:07:08.911 user 0m29.346s 00:07:08.911 sys 0m4.922s 00:07:08.911 17:41:28 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:08.911 ************************************ 00:07:08.911 END TEST cpu_locks 00:07:08.911 ************************************ 00:07:08.911 17:41:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.911 ************************************ 00:07:08.911 END TEST event 00:07:08.911 ************************************ 00:07:08.911 00:07:08.911 real 0m44.231s 00:07:08.911 user 1m24.423s 00:07:08.911 sys 0m8.128s 00:07:08.911 17:41:28 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:08.911 17:41:28 event -- common/autotest_common.sh@10 -- # set +x 00:07:09.173 17:41:28 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:09.173 17:41:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:09.173 17:41:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:09.173 17:41:28 -- common/autotest_common.sh@10 -- # set +x 00:07:09.173 ************************************ 00:07:09.173 START TEST thread 00:07:09.173 ************************************ 00:07:09.173 17:41:28 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:09.173 * Looking for test storage... 
00:07:09.173 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:09.173 17:41:29 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:09.173 17:41:29 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:09.173 17:41:29 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:07:09.173 17:41:29 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:09.173 17:41:29 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.173 17:41:29 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.173 17:41:29 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.173 17:41:29 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.173 17:41:29 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.173 17:41:29 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.173 17:41:29 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.173 17:41:29 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.173 17:41:29 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.173 17:41:29 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.173 17:41:29 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.173 17:41:29 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:09.173 17:41:29 thread -- scripts/common.sh@345 -- # : 1 00:07:09.173 17:41:29 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.173 17:41:29 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:09.173 17:41:29 thread -- scripts/common.sh@365 -- # decimal 1 00:07:09.173 17:41:29 thread -- scripts/common.sh@353 -- # local d=1 00:07:09.173 17:41:29 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.173 17:41:29 thread -- scripts/common.sh@355 -- # echo 1 00:07:09.173 17:41:29 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.173 17:41:29 thread -- scripts/common.sh@366 -- # decimal 2 00:07:09.173 17:41:29 thread -- scripts/common.sh@353 -- # local d=2 00:07:09.173 17:41:29 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.173 17:41:29 thread -- scripts/common.sh@355 -- # echo 2 00:07:09.173 17:41:29 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.173 17:41:29 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.173 17:41:29 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.173 17:41:29 thread -- scripts/common.sh@368 -- # return 0 00:07:09.173 17:41:29 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.173 17:41:29 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:09.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.173 --rc genhtml_branch_coverage=1 00:07:09.173 --rc genhtml_function_coverage=1 00:07:09.173 --rc genhtml_legend=1 00:07:09.173 --rc geninfo_all_blocks=1 00:07:09.173 --rc geninfo_unexecuted_blocks=1 00:07:09.173 00:07:09.173 ' 00:07:09.173 17:41:29 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:09.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.173 --rc genhtml_branch_coverage=1 00:07:09.173 --rc genhtml_function_coverage=1 00:07:09.173 --rc genhtml_legend=1 00:07:09.173 --rc geninfo_all_blocks=1 00:07:09.173 --rc geninfo_unexecuted_blocks=1 00:07:09.173 00:07:09.173 ' 00:07:09.173 17:41:29 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:09.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:09.173 --rc genhtml_branch_coverage=1 00:07:09.173 --rc genhtml_function_coverage=1 00:07:09.173 --rc genhtml_legend=1 00:07:09.173 --rc geninfo_all_blocks=1 00:07:09.173 --rc geninfo_unexecuted_blocks=1 00:07:09.173 00:07:09.173 ' 00:07:09.173 17:41:29 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:09.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.173 --rc genhtml_branch_coverage=1 00:07:09.173 --rc genhtml_function_coverage=1 00:07:09.173 --rc genhtml_legend=1 00:07:09.173 --rc geninfo_all_blocks=1 00:07:09.173 --rc geninfo_unexecuted_blocks=1 00:07:09.173 00:07:09.173 ' 00:07:09.173 17:41:29 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:09.173 17:41:29 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:07:09.173 17:41:29 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:09.173 17:41:29 thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.173 ************************************ 00:07:09.173 START TEST thread_poller_perf 00:07:09.173 ************************************ 00:07:09.173 17:41:29 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:09.173 [2024-11-05 17:41:29.114153] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:07:09.173 [2024-11-05 17:41:29.114381] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72679 ] 00:07:09.433 [2024-11-05 17:41:29.243134] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:09.433 [2024-11-05 17:41:29.273781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.433 [2024-11-05 17:41:29.292896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.433 Running 1000 pollers for 1 seconds with 1 microseconds period. 
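Judging from the banner, the poller_perf flags map as: -b sets the number of pollers (1000), -l the poller period in microseconds (1 here, 0 in the next pass), and -t the run time in seconds. The two invocations used by this suite, for reference:

  # Timed pollers, 1 us period:
  /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
  # Busy pollers, no period:
  /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1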
00:07:10.374 [2024-11-05T17:41:30.365Z] ====================================== 00:07:10.374 [2024-11-05T17:41:30.365Z] busy:2615600636 (cyc) 00:07:10.374 [2024-11-05T17:41:30.365Z] total_run_count: 305000 00:07:10.374 [2024-11-05T17:41:30.365Z] tsc_hz: 2600000000 (cyc) 00:07:10.374 [2024-11-05T17:41:30.365Z] ====================================== 00:07:10.374 [2024-11-05T17:41:30.365Z] poller_cost: 8575 (cyc), 3298 (nsec) 00:07:10.374 00:07:10.374 real 0m1.259s 00:07:10.374 user 0m1.083s 00:07:10.374 sys 0m0.069s 00:07:10.374 17:41:30 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:10.374 ************************************ 00:07:10.374 END TEST thread_poller_perf 00:07:10.374 ************************************ 00:07:10.374 17:41:30 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:10.632 17:41:30 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:10.632 17:41:30 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:07:10.632 17:41:30 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:10.632 17:41:30 thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.632 ************************************ 00:07:10.632 START TEST thread_poller_perf 00:07:10.632 ************************************ 00:07:10.632 17:41:30 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:10.632 [2024-11-05 17:41:30.416905] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:07:10.632 [2024-11-05 17:41:30.417216] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72710 ] 00:07:10.632 [2024-11-05 17:41:30.544578] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:10.632 [2024-11-05 17:41:30.577236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.632 Running 1000 pollers for 1 seconds with 0 microseconds period. 
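The poller_cost line in the first pass is just busy cycles divided by poller executions, converted to wall time with the reported TSC frequency. Re-deriving those numbers in shell arithmetic:

  echo $(( 2615600636 / 305000 ))              # -> 8575 cyc per poller execution
  echo $(( 8575 * 1000000000 / 2600000000 ))   # -> 3298 nsec at tsc_hz 2600000000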
00:07:10.632 [2024-11-05 17:41:30.596098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.071 [2024-11-05T17:41:32.062Z] ====================================== 00:07:12.071 [2024-11-05T17:41:32.062Z] busy:2603403390 (cyc) 00:07:12.071 [2024-11-05T17:41:32.062Z] total_run_count: 3936000 00:07:12.071 [2024-11-05T17:41:32.062Z] tsc_hz: 2600000000 (cyc) 00:07:12.071 [2024-11-05T17:41:32.062Z] ====================================== 00:07:12.071 [2024-11-05T17:41:32.062Z] poller_cost: 661 (cyc), 254 (nsec) 00:07:12.071 00:07:12.071 real 0m1.270s 00:07:12.071 user 0m1.096s 00:07:12.071 sys 0m0.066s 00:07:12.071 17:41:31 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:12.071 ************************************ 00:07:12.071 END TEST thread_poller_perf 00:07:12.071 ************************************ 00:07:12.071 17:41:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:12.071 17:41:31 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:12.071 ************************************ 00:07:12.071 END TEST thread 00:07:12.071 ************************************ 00:07:12.071 00:07:12.071 real 0m2.778s 00:07:12.071 user 0m2.292s 00:07:12.071 sys 0m0.254s 00:07:12.071 17:41:31 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:12.071 17:41:31 thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.071 17:41:31 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:12.071 17:41:31 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:12.071 17:41:31 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:12.071 17:41:31 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:12.071 17:41:31 -- common/autotest_common.sh@10 -- # set +x 00:07:12.071 ************************************ 00:07:12.071 START TEST app_cmdline 00:07:12.071 ************************************ 00:07:12.071 17:41:31 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:12.071 * Looking for test storage... 
00:07:12.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:12.071 17:41:31 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:12.071 17:41:31 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:07:12.071 17:41:31 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:12.071 17:41:31 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:12.071 17:41:31 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:12.071 17:41:31 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:12.071 17:41:31 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:12.071 17:41:31 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.071 17:41:31 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:12.071 17:41:31 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:12.071 17:41:31 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:12.071 17:41:31 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:12.071 17:41:31 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:12.071 17:41:31 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:12.071 17:41:31 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:12.071 17:41:31 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:12.071 17:41:31 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:12.071 17:41:31 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:12.071 17:41:31 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:12.071 17:41:31 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:12.071 17:41:31 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:12.071 17:41:31 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.071 17:41:31 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:12.071 17:41:31 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:12.071 17:41:31 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:12.071 17:41:31 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:12.071 17:41:31 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.071 17:41:31 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:12.071 17:41:31 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:12.071 17:41:31 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:12.071 17:41:31 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:12.071 17:41:31 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:12.071 17:41:31 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.071 17:41:31 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:12.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.071 --rc genhtml_branch_coverage=1 00:07:12.071 --rc genhtml_function_coverage=1 00:07:12.071 --rc genhtml_legend=1 00:07:12.071 --rc geninfo_all_blocks=1 00:07:12.071 --rc geninfo_unexecuted_blocks=1 00:07:12.071 00:07:12.071 ' 00:07:12.071 17:41:31 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:12.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.071 --rc genhtml_branch_coverage=1 00:07:12.071 --rc genhtml_function_coverage=1 00:07:12.071 --rc genhtml_legend=1 00:07:12.071 --rc geninfo_all_blocks=1 00:07:12.071 --rc geninfo_unexecuted_blocks=1 00:07:12.071 
00:07:12.071 ' 00:07:12.071 17:41:31 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:12.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.071 --rc genhtml_branch_coverage=1 00:07:12.071 --rc genhtml_function_coverage=1 00:07:12.071 --rc genhtml_legend=1 00:07:12.071 --rc geninfo_all_blocks=1 00:07:12.071 --rc geninfo_unexecuted_blocks=1 00:07:12.071 00:07:12.071 ' 00:07:12.071 17:41:31 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:12.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.071 --rc genhtml_branch_coverage=1 00:07:12.071 --rc genhtml_function_coverage=1 00:07:12.071 --rc genhtml_legend=1 00:07:12.071 --rc geninfo_all_blocks=1 00:07:12.071 --rc geninfo_unexecuted_blocks=1 00:07:12.071 00:07:12.071 ' 00:07:12.071 17:41:31 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:12.071 17:41:31 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=72799 00:07:12.071 17:41:31 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:12.071 17:41:31 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 72799 00:07:12.071 17:41:31 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 72799 ']' 00:07:12.071 17:41:31 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.071 17:41:31 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:12.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.071 17:41:31 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.071 17:41:31 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:12.071 17:41:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:12.071 [2024-11-05 17:41:32.013708] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:07:12.071 [2024-11-05 17:41:32.014089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72799 ] 00:07:12.333 [2024-11-05 17:41:32.147252] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
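Note the --rpcs-allowed flag on the spdk_tgt invocation above: only the two listed methods are served, which is exactly what the test pokes at next. Probing the allowlist by hand would look like:

  # On the allowlist: returns the version JSON shown below.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
  # Not on the allowlist: expect the -32601 "Method not found" error.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats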
00:07:12.333 [2024-11-05 17:41:32.178204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.333 [2024-11-05 17:41:32.206851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.899 17:41:32 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:12.899 17:41:32 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:07:12.899 17:41:32 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:13.157 { 00:07:13.157 "version": "SPDK v25.01-pre git sha1 f220d590c", 00:07:13.157 "fields": { 00:07:13.157 "major": 25, 00:07:13.157 "minor": 1, 00:07:13.157 "patch": 0, 00:07:13.157 "suffix": "-pre", 00:07:13.157 "commit": "f220d590c" 00:07:13.157 } 00:07:13.157 } 00:07:13.157 17:41:33 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:13.157 17:41:33 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:13.157 17:41:33 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:13.157 17:41:33 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:13.157 17:41:33 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:13.157 17:41:33 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.157 17:41:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:13.157 17:41:33 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:13.157 17:41:33 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:13.157 17:41:33 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.157 17:41:33 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:13.157 17:41:33 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:13.157 17:41:33 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:13.157 17:41:33 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:13.158 17:41:33 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:13.158 17:41:33 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:13.158 17:41:33 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:13.158 17:41:33 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:13.158 17:41:33 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:13.158 17:41:33 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:13.158 17:41:33 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:13.158 17:41:33 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:13.158 17:41:33 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:13.158 17:41:33 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:13.416 request: 00:07:13.416 { 00:07:13.416 "method": "env_dpdk_get_mem_stats", 00:07:13.416 "req_id": 1 00:07:13.416 } 00:07:13.416 Got JSON-RPC error response 00:07:13.416 response: 00:07:13.416 { 00:07:13.416 "code": -32601, 00:07:13.416 
"message": "Method not found" 00:07:13.416 } 00:07:13.416 17:41:33 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:13.416 17:41:33 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:13.416 17:41:33 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:13.416 17:41:33 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:13.416 17:41:33 app_cmdline -- app/cmdline.sh@1 -- # killprocess 72799 00:07:13.416 17:41:33 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 72799 ']' 00:07:13.416 17:41:33 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 72799 00:07:13.416 17:41:33 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:07:13.416 17:41:33 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:13.416 17:41:33 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72799 00:07:13.416 killing process with pid 72799 00:07:13.416 17:41:33 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:13.416 17:41:33 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:13.416 17:41:33 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72799' 00:07:13.416 17:41:33 app_cmdline -- common/autotest_common.sh@971 -- # kill 72799 00:07:13.416 17:41:33 app_cmdline -- common/autotest_common.sh@976 -- # wait 72799 00:07:13.674 ************************************ 00:07:13.674 END TEST app_cmdline 00:07:13.674 ************************************ 00:07:13.674 00:07:13.674 real 0m1.794s 00:07:13.674 user 0m2.101s 00:07:13.674 sys 0m0.461s 00:07:13.674 17:41:33 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:13.674 17:41:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:13.674 17:41:33 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:13.674 17:41:33 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:13.674 17:41:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:13.674 17:41:33 -- common/autotest_common.sh@10 -- # set +x 00:07:13.674 ************************************ 00:07:13.674 START TEST version 00:07:13.674 ************************************ 00:07:13.674 17:41:33 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:13.933 * Looking for test storage... 
00:07:13.933 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:13.933 17:41:33 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:13.933 17:41:33 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:13.933 17:41:33 version -- common/autotest_common.sh@1691 -- # lcov --version 00:07:13.933 17:41:33 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:13.933 17:41:33 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.933 17:41:33 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.933 17:41:33 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.933 17:41:33 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.933 17:41:33 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.933 17:41:33 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.933 17:41:33 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.933 17:41:33 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.933 17:41:33 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.933 17:41:33 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.933 17:41:33 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.933 17:41:33 version -- scripts/common.sh@344 -- # case "$op" in 00:07:13.933 17:41:33 version -- scripts/common.sh@345 -- # : 1 00:07:13.933 17:41:33 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.933 17:41:33 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:13.933 17:41:33 version -- scripts/common.sh@365 -- # decimal 1 00:07:13.933 17:41:33 version -- scripts/common.sh@353 -- # local d=1 00:07:13.933 17:41:33 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.933 17:41:33 version -- scripts/common.sh@355 -- # echo 1 00:07:13.933 17:41:33 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.933 17:41:33 version -- scripts/common.sh@366 -- # decimal 2 00:07:13.933 17:41:33 version -- scripts/common.sh@353 -- # local d=2 00:07:13.933 17:41:33 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.933 17:41:33 version -- scripts/common.sh@355 -- # echo 2 00:07:13.933 17:41:33 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.933 17:41:33 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.933 17:41:33 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.933 17:41:33 version -- scripts/common.sh@368 -- # return 0 00:07:13.933 17:41:33 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.933 17:41:33 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:13.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.933 --rc genhtml_branch_coverage=1 00:07:13.933 --rc genhtml_function_coverage=1 00:07:13.933 --rc genhtml_legend=1 00:07:13.933 --rc geninfo_all_blocks=1 00:07:13.933 --rc geninfo_unexecuted_blocks=1 00:07:13.933 00:07:13.933 ' 00:07:13.933 17:41:33 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:13.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.933 --rc genhtml_branch_coverage=1 00:07:13.933 --rc genhtml_function_coverage=1 00:07:13.933 --rc genhtml_legend=1 00:07:13.933 --rc geninfo_all_blocks=1 00:07:13.933 --rc geninfo_unexecuted_blocks=1 00:07:13.933 00:07:13.933 ' 00:07:13.933 17:41:33 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:13.933 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:13.933 --rc genhtml_branch_coverage=1 00:07:13.933 --rc genhtml_function_coverage=1 00:07:13.933 --rc genhtml_legend=1 00:07:13.933 --rc geninfo_all_blocks=1 00:07:13.933 --rc geninfo_unexecuted_blocks=1 00:07:13.933 00:07:13.933 ' 00:07:13.933 17:41:33 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:13.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.933 --rc genhtml_branch_coverage=1 00:07:13.933 --rc genhtml_function_coverage=1 00:07:13.933 --rc genhtml_legend=1 00:07:13.933 --rc geninfo_all_blocks=1 00:07:13.933 --rc geninfo_unexecuted_blocks=1 00:07:13.933 00:07:13.933 ' 00:07:13.933 17:41:33 version -- app/version.sh@17 -- # get_header_version major 00:07:13.933 17:41:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:13.933 17:41:33 version -- app/version.sh@14 -- # cut -f2 00:07:13.933 17:41:33 version -- app/version.sh@14 -- # tr -d '"' 00:07:13.933 17:41:33 version -- app/version.sh@17 -- # major=25 00:07:13.933 17:41:33 version -- app/version.sh@18 -- # get_header_version minor 00:07:13.933 17:41:33 version -- app/version.sh@14 -- # cut -f2 00:07:13.933 17:41:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:13.933 17:41:33 version -- app/version.sh@14 -- # tr -d '"' 00:07:13.933 17:41:33 version -- app/version.sh@18 -- # minor=1 00:07:13.933 17:41:33 version -- app/version.sh@19 -- # get_header_version patch 00:07:13.933 17:41:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:13.933 17:41:33 version -- app/version.sh@14 -- # cut -f2 00:07:13.933 17:41:33 version -- app/version.sh@14 -- # tr -d '"' 00:07:13.933 17:41:33 version -- app/version.sh@19 -- # patch=0 00:07:13.933 17:41:33 version -- app/version.sh@20 -- # get_header_version suffix 00:07:13.933 17:41:33 version -- app/version.sh@14 -- # cut -f2 00:07:13.933 17:41:33 version -- app/version.sh@14 -- # tr -d '"' 00:07:13.933 17:41:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:13.933 17:41:33 version -- app/version.sh@20 -- # suffix=-pre 00:07:13.933 17:41:33 version -- app/version.sh@22 -- # version=25.1 00:07:13.933 17:41:33 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:13.933 17:41:33 version -- app/version.sh@28 -- # version=25.1rc0 00:07:13.933 17:41:33 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:13.933 17:41:33 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:13.933 17:41:33 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:13.933 17:41:33 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:13.933 ************************************ 00:07:13.933 END TEST version 00:07:13.933 ************************************ 00:07:13.933 00:07:13.933 real 0m0.203s 00:07:13.933 user 0m0.125s 00:07:13.933 sys 0m0.103s 00:07:13.933 17:41:33 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:13.933 17:41:33 version -- common/autotest_common.sh@10 -- # set +x 00:07:13.933 17:41:33 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:13.933 17:41:33 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:13.933 17:41:33 -- spdk/autotest.sh@194 -- # uname -s 00:07:13.934 17:41:33 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:13.934 17:41:33 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:13.934 17:41:33 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:13.934 17:41:33 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:07:13.934 17:41:33 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:13.934 17:41:33 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:13.934 17:41:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:13.934 17:41:33 -- common/autotest_common.sh@10 -- # set +x 00:07:13.934 ************************************ 00:07:13.934 START TEST blockdev_nvme 00:07:13.934 ************************************ 00:07:13.934 17:41:33 blockdev_nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:14.192 * Looking for test storage... 00:07:14.192 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:14.192 17:41:33 blockdev_nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:14.192 17:41:33 blockdev_nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:07:14.192 17:41:33 blockdev_nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:14.192 17:41:34 blockdev_nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:14.192 17:41:34 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:14.192 17:41:34 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:14.192 17:41:34 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:14.192 17:41:34 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:14.192 17:41:34 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:14.192 17:41:34 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:14.192 17:41:34 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:14.192 17:41:34 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:14.192 17:41:34 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:14.192 17:41:34 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:14.193 17:41:34 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:14.193 17:41:34 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:14.193 17:41:34 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:07:14.193 17:41:34 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:14.193 17:41:34 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:14.193 17:41:34 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:07:14.193 17:41:34 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:07:14.193 17:41:34 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:14.193 17:41:34 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:07:14.193 17:41:34 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:14.193 17:41:34 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:07:14.193 17:41:34 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:07:14.193 17:41:34 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:14.193 17:41:34 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:07:14.193 17:41:34 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:14.193 17:41:34 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:14.193 17:41:34 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:14.193 17:41:34 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:07:14.193 17:41:34 blockdev_nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:14.193 17:41:34 blockdev_nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:14.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.193 --rc genhtml_branch_coverage=1 00:07:14.193 --rc genhtml_function_coverage=1 00:07:14.193 --rc genhtml_legend=1 00:07:14.193 --rc geninfo_all_blocks=1 00:07:14.193 --rc geninfo_unexecuted_blocks=1 00:07:14.193 00:07:14.193 ' 00:07:14.193 17:41:34 blockdev_nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:14.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.193 --rc genhtml_branch_coverage=1 00:07:14.193 --rc genhtml_function_coverage=1 00:07:14.193 --rc genhtml_legend=1 00:07:14.193 --rc geninfo_all_blocks=1 00:07:14.193 --rc geninfo_unexecuted_blocks=1 00:07:14.193 00:07:14.193 ' 00:07:14.193 17:41:34 blockdev_nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:14.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.193 --rc genhtml_branch_coverage=1 00:07:14.193 --rc genhtml_function_coverage=1 00:07:14.193 --rc genhtml_legend=1 00:07:14.193 --rc geninfo_all_blocks=1 00:07:14.193 --rc geninfo_unexecuted_blocks=1 00:07:14.193 00:07:14.193 ' 00:07:14.193 17:41:34 blockdev_nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:14.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.193 --rc genhtml_branch_coverage=1 00:07:14.193 --rc genhtml_function_coverage=1 00:07:14.193 --rc genhtml_legend=1 00:07:14.193 --rc geninfo_all_blocks=1 00:07:14.193 --rc geninfo_unexecuted_blocks=1 00:07:14.193 00:07:14.193 ' 00:07:14.193 17:41:34 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:14.193 17:41:34 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:14.193 17:41:34 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:14.193 17:41:34 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:14.193 17:41:34 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:14.193 17:41:34 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:14.193 17:41:34 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:07:14.193 17:41:34 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:14.193 17:41:34 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:14.193 17:41:34 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:14.193 17:41:34 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:14.193 17:41:34 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:14.193 17:41:34 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:07:14.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.193 17:41:34 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:14.193 17:41:34 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:14.193 17:41:34 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:07:14.193 17:41:34 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:14.193 17:41:34 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:07:14.193 17:41:34 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:14.193 17:41:34 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:14.193 17:41:34 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:14.193 17:41:34 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:07:14.193 17:41:34 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:07:14.193 17:41:34 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:14.193 17:41:34 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72960 00:07:14.193 17:41:34 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:14.193 17:41:34 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 72960 00:07:14.193 17:41:34 blockdev_nvme -- common/autotest_common.sh@833 -- # '[' -z 72960 ']' 00:07:14.193 17:41:34 blockdev_nvme -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.193 17:41:34 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:14.193 17:41:34 blockdev_nvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:14.193 17:41:34 blockdev_nvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.193 17:41:34 blockdev_nvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:14.193 17:41:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:14.193 [2024-11-05 17:41:34.106784] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:07:14.193 [2024-11-05 17:41:34.106903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72960 ] 00:07:14.451 [2024-11-05 17:41:34.234596] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
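[editor's note] Between the spdk_tgt launch and the first rpc_cmd, waitforlisten above polls the target's RPC socket. A minimal sketch of that loop, assuming the default /var/tmp/spdk.sock path seen in this log and using the real rpc_get_methods RPC as the readiness probe (the actual helper in common/autotest_common.sh is more defensive about timeouts and socket types):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" || return 1                        # target died while starting
            if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
                return 0                                      # RPC server is answering
            fi
            sleep 0.1
        done
        return 1                                              # gave up waiting
    }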
00:07:14.451 [2024-11-05 17:41:34.258469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.451 [2024-11-05 17:41:34.276475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.019 17:41:34 blockdev_nvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:15.019 17:41:34 blockdev_nvme -- common/autotest_common.sh@866 -- # return 0 00:07:15.019 17:41:34 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:15.019 17:41:34 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:07:15.019 17:41:34 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:15.019 17:41:34 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:15.019 17:41:34 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:15.019 17:41:34 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:15.019 17:41:34 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.019 17:41:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:15.281 17:41:35 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.281 17:41:35 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:15.281 17:41:35 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.281 17:41:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:15.542 17:41:35 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.542 17:41:35 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:07:15.542 17:41:35 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:15.542 17:41:35 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.542 17:41:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:15.542 17:41:35 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.542 17:41:35 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:15.542 17:41:35 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.542 17:41:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:15.542 17:41:35 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.542 17:41:35 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:15.542 17:41:35 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.542 17:41:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:15.542 17:41:35 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.542 17:41:35 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:15.542 17:41:35 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:15.542 17:41:35 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:15.542 17:41:35 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:15.542 17:41:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:15.542 17:41:35 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.542 17:41:35 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:15.543 17:41:35 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "39ea1fd4-0201-404f-84f6-3ba2097af5ca"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "39ea1fd4-0201-404f-84f6-3ba2097af5ca",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "2168ddd2-fc6f-48eb-8d4b-aae7cc33acda"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "2168ddd2-fc6f-48eb-8d4b-aae7cc33acda",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "f144e141-5af3-47ae-a248-5c766c04aab7"' ' ],' ' 
"product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f144e141-5af3-47ae-a248-5c766c04aab7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "d2c08a67-99ea-4a2a-ab7d-21f818c6d067"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d2c08a67-99ea-4a2a-ab7d-21f818c6d067",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "3c1e544d-cd02-404c-baf4-d34e93f07d2b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3c1e544d-cd02-404c-baf4-d34e93f07d2b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' 
"nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "1ebc6d22-b33e-4152-ae64-2d3109c08e1d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1ebc6d22-b33e-4152-ae64-2d3109c08e1d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:15.543 17:41:35 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:15.543 17:41:35 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:15.543 17:41:35 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:15.543 17:41:35 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:15.543 17:41:35 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 72960 00:07:15.543 17:41:35 blockdev_nvme -- common/autotest_common.sh@952 -- # '[' -z 72960 ']' 00:07:15.543 17:41:35 blockdev_nvme -- common/autotest_common.sh@956 -- # kill -0 72960 00:07:15.543 17:41:35 blockdev_nvme -- common/autotest_common.sh@957 -- # uname 00:07:15.543 17:41:35 blockdev_nvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:15.543 17:41:35 blockdev_nvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72960 00:07:15.543 killing process with pid 72960 00:07:15.543 17:41:35 blockdev_nvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:15.543 17:41:35 
blockdev_nvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:15.543 17:41:35 blockdev_nvme -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72960' 00:07:15.543 17:41:35 blockdev_nvme -- common/autotest_common.sh@971 -- # kill 72960 00:07:15.543 17:41:35 blockdev_nvme -- common/autotest_common.sh@976 -- # wait 72960 00:07:15.801 17:41:35 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:15.801 17:41:35 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:15.801 17:41:35 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:07:15.801 17:41:35 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:15.801 17:41:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:15.801 ************************************ 00:07:15.801 START TEST bdev_hello_world 00:07:15.801 ************************************ 00:07:15.801 17:41:35 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:16.061 [2024-11-05 17:41:35.838769] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:07:16.061 [2024-11-05 17:41:35.838883] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73022 ] 00:07:16.061 [2024-11-05 17:41:35.967484] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:16.061 [2024-11-05 17:41:35.996088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.061 [2024-11-05 17:41:36.017212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.634 [2024-11-05 17:41:36.395159] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:16.634 [2024-11-05 17:41:36.395224] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:16.634 [2024-11-05 17:41:36.395246] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:16.634 [2024-11-05 17:41:36.397656] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:16.634 [2024-11-05 17:41:36.398663] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:16.634 [2024-11-05 17:41:36.398702] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:16.634 [2024-11-05 17:41:36.399216] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
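[editor's note] The write-then-read sequence that follows comes from the prebuilt hello_bdev example. Per the command line captured in this trace, it can be rerun by hand against the same JSON config, with -b picking which bdev to open (root privileges and configured hugepages assumed):

    cd /home/vagrant/spdk_repo/spdk
    sudo build/examples/hello_bdev \
        --json test/bdev/bdev.json \
        -b Nvme0n1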
00:07:16.634 00:07:16.634 [2024-11-05 17:41:36.399247] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:16.634 00:07:16.634 real 0m0.824s 00:07:16.634 user 0m0.539s 00:07:16.634 sys 0m0.180s 00:07:16.634 ************************************ 00:07:16.634 END TEST bdev_hello_world 00:07:16.634 ************************************ 00:07:16.634 17:41:36 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:16.634 17:41:36 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:16.895 17:41:36 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:16.895 17:41:36 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:16.895 17:41:36 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:16.895 17:41:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:16.895 ************************************ 00:07:16.895 START TEST bdev_bounds 00:07:16.895 ************************************ 00:07:16.895 17:41:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:07:16.895 17:41:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73053 00:07:16.895 17:41:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:16.895 Process bdevio pid: 73053 00:07:16.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.895 17:41:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73053' 00:07:16.895 17:41:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73053 00:07:16.895 17:41:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 73053 ']' 00:07:16.895 17:41:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.895 17:41:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:16.895 17:41:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:16.895 17:41:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.895 17:41:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:16.895 17:41:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:16.895 [2024-11-05 17:41:36.751154] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:07:16.895 [2024-11-05 17:41:36.751325] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73053 ] 00:07:17.157 [2024-11-05 17:41:36.888779] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
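[editor's note] For bdev_bounds, bdevio is launched with -w (start up, then wait for an RPC before doing any I/O) and -s 0 (the zero PRE_RESERVED_MEM memory-size hint set earlier), and a second process triggers the CUnit suites. Reassembled from the two commands visible in this trace, the manual equivalent is roughly:

    # shell 1: the I/O test server; it sits idle on /var/tmp/spdk.sock until told to run
    sudo test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &

    # shell 2: once the socket is up, fire every registered suite
    sudo test/bdev/bdevio/tests.py perform_tests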
00:07:17.157 [2024-11-05 17:41:36.915790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:17.157 [2024-11-05 17:41:36.941934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:17.157 [2024-11-05 17:41:36.942344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:17.157 [2024-11-05 17:41:36.942494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:17.730 17:41:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:07:17.730 17:41:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0
00:07:17.730 17:41:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:07:17.730 I/O targets:
00:07:17.730 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:07:17.730 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB)
00:07:17.730 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:07:17.730 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:07:17.730 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:07:17.730 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
00:07:17.730
00:07:17.730
00:07:17.730 CUnit - A unit testing framework for C - Version 2.1-3
00:07:17.730 http://cunit.sourceforge.net/
00:07:17.730
00:07:17.730
00:07:17.730 Suite: bdevio tests on: Nvme3n1
00:07:17.730 Test: blockdev write read block ...passed
00:07:17.730 Test: blockdev write zeroes read block ...passed
00:07:17.730 Test: blockdev write zeroes read no split ...passed
00:07:17.992 Test: blockdev write zeroes read split ...passed
00:07:17.992 Test: blockdev write zeroes read split partial ...passed
00:07:17.992 Test: blockdev reset ...[2024-11-05 17:41:37.735405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller
00:07:17.992 [2024-11-05 17:41:37.739720] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful.
00:07:17.992 passed 00:07:17.992 Test: blockdev write read 8 blocks ...passed 00:07:17.992 Test: blockdev write read size > 128k ...passed 00:07:17.992 Test: blockdev write read invalid size ...passed 00:07:17.992 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:17.992 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:17.992 Test: blockdev write read max offset ...passed 00:07:17.992 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:17.992 Test: blockdev writev readv 8 blocks ...passed 00:07:17.992 Test: blockdev writev readv 30 x 1block ...passed 00:07:17.992 Test: blockdev writev readv block ...passed 00:07:17.992 Test: blockdev writev readv size > 128k ...passed 00:07:17.992 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:17.992 Test: blockdev comparev and writev ...[2024-11-05 17:41:37.759116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cfe06000 len:0x1000 00:07:17.992 [2024-11-05 17:41:37.759194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:17.992 passed 00:07:17.992 Test: blockdev nvme passthru rw ...passed 00:07:17.992 Test: blockdev nvme passthru vendor specific ...[2024-11-05 17:41:37.761844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:17.992 [2024-11-05 17:41:37.761891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:17.992 passed 00:07:17.992 Test: blockdev nvme admin passthru ...passed 00:07:17.992 Test: blockdev copy ...passed 00:07:17.992 Suite: bdevio tests on: Nvme2n3 00:07:17.992 Test: blockdev write read block ...passed 00:07:17.992 Test: blockdev write zeroes read block ...passed 00:07:17.992 Test: blockdev write zeroes read no split ...passed 00:07:17.992 Test: blockdev write zeroes read split ...passed 00:07:17.992 Test: blockdev write zeroes read split partial ...passed 00:07:17.992 Test: blockdev reset ...[2024-11-05 17:41:37.792794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:17.992 [2024-11-05 17:41:37.797107] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:17.992 passed 00:07:17.992 Test: blockdev write read 8 blocks ...passed 00:07:17.992 Test: blockdev write read size > 128k ...passed 00:07:17.992 Test: blockdev write read invalid size ...passed 00:07:17.992 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:17.992 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:17.992 Test: blockdev write read max offset ...passed 00:07:17.992 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:17.992 Test: blockdev writev readv 8 blocks ...passed 00:07:17.992 Test: blockdev writev readv 30 x 1block ...passed 00:07:17.992 Test: blockdev writev readv block ...passed 00:07:17.992 Test: blockdev writev readv size > 128k ...passed 00:07:17.992 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:17.992 Test: blockdev comparev and writev ...[2024-11-05 17:41:37.815924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cc005000 len:0x1000 00:07:17.992 [2024-11-05 17:41:37.815982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:17.992 passed 00:07:17.992 Test: blockdev nvme passthru rw ...passed 00:07:17.992 Test: blockdev nvme passthru vendor specific ...[2024-11-05 17:41:37.818304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:17.992 [2024-11-05 17:41:37.818350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:17.992 passed 00:07:17.992 Test: blockdev nvme admin passthru ...passed 00:07:17.992 Test: blockdev copy ...passed 00:07:17.992 Suite: bdevio tests on: Nvme2n2 00:07:17.992 Test: blockdev write read block ...passed 00:07:17.992 Test: blockdev write zeroes read block ...passed 00:07:17.992 Test: blockdev write zeroes read no split ...passed 00:07:17.992 Test: blockdev write zeroes read split ...passed 00:07:17.992 Test: blockdev write zeroes read split partial ...passed 00:07:17.993 Test: blockdev reset ...[2024-11-05 17:41:37.850548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:17.993 passed 00:07:17.993 Test: blockdev write read 8 blocks ...[2024-11-05 17:41:37.854576] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:17.993 passed 00:07:17.993 Test: blockdev write read size > 128k ...passed 00:07:17.993 Test: blockdev write read invalid size ...passed 00:07:17.993 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:17.993 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:17.993 Test: blockdev write read max offset ...passed 00:07:17.993 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:17.993 Test: blockdev writev readv 8 blocks ...passed 00:07:17.993 Test: blockdev writev readv 30 x 1block ...passed 00:07:17.993 Test: blockdev writev readv block ...passed 00:07:17.993 Test: blockdev writev readv size > 128k ...passed 00:07:17.993 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:17.993 Test: blockdev comparev and writev ...[2024-11-05 17:41:37.872573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e6236000 len:0x1000 00:07:17.993 [2024-11-05 17:41:37.872631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:17.993 passed 00:07:17.993 Test: blockdev nvme passthru rw ...passed 00:07:17.993 Test: blockdev nvme passthru vendor specific ...[2024-11-05 17:41:37.875444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:17.993 [2024-11-05 17:41:37.875493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:17.993 passed 00:07:17.993 Test: blockdev nvme admin passthru ...passed 00:07:17.993 Test: blockdev copy ...passed 00:07:17.993 Suite: bdevio tests on: Nvme2n1 00:07:17.993 Test: blockdev write read block ...passed 00:07:17.993 Test: blockdev write zeroes read block ...passed 00:07:17.993 Test: blockdev write zeroes read no split ...passed 00:07:17.993 Test: blockdev write zeroes read split ...passed 00:07:17.993 Test: blockdev write zeroes read split partial ...passed 00:07:17.993 Test: blockdev reset ...[2024-11-05 17:41:37.906801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:17.993 [2024-11-05 17:41:37.910726] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:17.993 passed 00:07:17.993 Test: blockdev write read 8 blocks ...passed 00:07:17.993 Test: blockdev write read size > 128k ...passed 00:07:17.993 Test: blockdev write read invalid size ...passed 00:07:17.993 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:17.993 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:17.993 Test: blockdev write read max offset ...passed 00:07:17.993 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:17.993 Test: blockdev writev readv 8 blocks ...passed 00:07:17.993 Test: blockdev writev readv 30 x 1block ...passed 00:07:17.993 Test: blockdev writev readv block ...passed 00:07:17.993 Test: blockdev writev readv size > 128k ...passed 00:07:17.993 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:17.993 Test: blockdev comparev and writev ...[2024-11-05 17:41:37.928923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e6230000 len:0x1000 00:07:17.993 [2024-11-05 17:41:37.928984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:17.993 passed 00:07:17.993 Test: blockdev nvme passthru rw ...passed 00:07:17.993 Test: blockdev nvme passthru vendor specific ...[2024-11-05 17:41:37.931590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:17.993 [2024-11-05 17:41:37.931641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:17.993 passed 00:07:17.993 Test: blockdev nvme admin passthru ...passed 00:07:17.993 Test: blockdev copy ...passed 00:07:17.993 Suite: bdevio tests on: Nvme1n1 00:07:17.993 Test: blockdev write read block ...passed 00:07:17.993 Test: blockdev write zeroes read block ...passed 00:07:17.993 Test: blockdev write zeroes read no split ...passed 00:07:17.993 Test: blockdev write zeroes read split ...passed 00:07:17.993 Test: blockdev write zeroes read split partial ...passed 00:07:17.993 Test: blockdev reset ...[2024-11-05 17:41:37.963332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:17.993 [2024-11-05 17:41:37.966575] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:17.993 passed 00:07:17.993 Test: blockdev write read 8 blocks ...passed 00:07:17.993 Test: blockdev write read size > 128k ...passed 00:07:17.993 Test: blockdev write read invalid size ...passed 00:07:17.993 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:17.993 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:17.993 Test: blockdev write read max offset ...passed 00:07:17.993 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:17.993 Test: blockdev writev readv 8 blocks ...passed 00:07:17.993 Test: blockdev writev readv 30 x 1block ...passed 00:07:17.993 Test: blockdev writev readv block ...passed 00:07:17.993 Test: blockdev writev readv size > 128k ...passed 00:07:17.993 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:18.254 Test: blockdev comparev and writev ...[2024-11-05 17:41:37.984927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e622c000 len:0x1000 00:07:18.254 [2024-11-05 17:41:37.984982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:18.254 passed 00:07:18.254 Test: blockdev nvme passthru rw ...passed 00:07:18.254 Test: blockdev nvme passthru vendor specific ...[2024-11-05 17:41:37.987503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:18.254 [2024-11-05 17:41:37.987552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:18.254 passed 00:07:18.254 Test: blockdev nvme admin passthru ...passed 00:07:18.254 Test: blockdev copy ...passed 00:07:18.254 Suite: bdevio tests on: Nvme0n1 00:07:18.254 Test: blockdev write read block ...passed 00:07:18.254 Test: blockdev write zeroes read block ...passed 00:07:18.254 Test: blockdev write zeroes read no split ...passed 00:07:18.254 Test: blockdev write zeroes read split ...passed 00:07:18.254 Test: blockdev write zeroes read split partial ...passed 00:07:18.254 Test: blockdev reset ...[2024-11-05 17:41:38.021383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:18.254 [2024-11-05 17:41:38.023458] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:07:18.254 passed
00:07:18.254 Test: blockdev write read 8 blocks ...passed
00:07:18.254 Test: blockdev write read size > 128k ...passed
00:07:18.254 Test: blockdev write read invalid size ...passed
00:07:18.254 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:07:18.254 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:07:18.254 Test: blockdev write read max offset ...passed
00:07:18.254 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:07:18.254 Test: blockdev writev readv 8 blocks ...passed
00:07:18.254 Test: blockdev writev readv 30 x 1block ...passed
00:07:18.254 Test: blockdev writev readv block ...passed
00:07:18.254 Test: blockdev writev readv size > 128k ...passed
00:07:18.254 Test: blockdev writev readv size > 128k in two iovs ...passed
00:07:18.254 Test: blockdev comparev and writev ...passed
00:07:18.254 Test: blockdev nvme passthru rw ...[2024-11-05 17:41:38.038872] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has
00:07:18.254 separate metadata which is not supported yet.
00:07:18.254 passed
00:07:18.254 Test: blockdev nvme passthru vendor specific ...[2024-11-05 17:41:38.040965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0
00:07:18.254 [2024-11-05 17:41:38.041026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1
00:07:18.254 passed
00:07:18.254 Test: blockdev nvme admin passthru ...passed
00:07:18.254 Test: blockdev copy ...passed
00:07:18.254
00:07:18.254 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:18.254               suites      6      6    n/a      0        0
00:07:18.254                tests    138    138    138      0        0
00:07:18.254              asserts    893    893    893      0      n/a
00:07:18.254
00:07:18.254 Elapsed time = 0.745 seconds
00:07:18.254 0
00:07:18.254 17:41:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73053
00:07:18.254 17:41:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 73053 ']'
00:07:18.254 17:41:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 73053
00:07:18.254 17:41:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname
00:07:18.255 17:41:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:07:18.255 17:41:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73053
00:07:18.255 17:41:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:07:18.255 17:41:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:07:18.255 killing process with pid 73053 17:41:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73053'
00:07:18.255 17:41:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 73053
00:07:18.255 17:41:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 73053
00:07:18.517 17:41:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:07:18.517
00:07:18.517 real 0m1.616s
00:07:18.517 user 0m3.940s
00:07:18.517 sys 0m0.373s
00:07:18.517 ************************************
00:07:18.517 END TEST bdev_bounds
00:07:18.517 ************************************
00:07:18.517 17:41:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:18.517 17:41:38 
blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:18.517 17:41:38 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:18.517 17:41:38 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:18.517 17:41:38 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:18.517 17:41:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:18.517 ************************************ 00:07:18.517 START TEST bdev_nbd 00:07:18.517 ************************************ 00:07:18.517 17:41:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:18.517 17:41:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:18.517 17:41:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:18.517 17:41:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.517 17:41:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:18.517 17:41:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:18.517 17:41:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:18.517 17:41:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:18.517 17:41:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:18.517 17:41:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:18.517 17:41:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:18.517 17:41:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:18.517 17:41:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:18.517 17:41:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:18.517 17:41:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:18.517 17:41:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:18.517 17:41:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73107 00:07:18.517 17:41:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:18.517 17:41:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73107 /var/tmp/spdk-nbd.sock 00:07:18.517 17:41:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 73107 ']' 00:07:18.517 17:41:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:18.517 17:41:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:18.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
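[editor's note] nbd_function_test drives a second SPDK app whose RPC socket is /var/tmp/spdk-nbd.sock, and each bdev in the list above is then exported as a kernel block device. In the without_nbd_idx variant used here the RPC omits the /dev/nbdX argument and the target picks a free one, so the per-device step seen in the following lines amounts to:

    # requires the nbd kernel module (the harness checks /sys/module/nbd first)
    for bdev in Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1; do
        # no explicit device argument: the target allocates a free /dev/nbdX and reports it
        nbd=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk "$bdev")
        echo "$bdev exported at $nbd"
    done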
00:07:18.517 17:41:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:18.517 17:41:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:18.517 17:41:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:18.517 17:41:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:18.517 [2024-11-05 17:41:38.445778] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:07:18.517 [2024-11-05 17:41:38.445944] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:18.779 [2024-11-05 17:41:38.581990] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:18.779 [2024-11-05 17:41:38.605477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.779 [2024-11-05 17:41:38.647200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.352 17:41:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:19.352 17:41:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:07:19.352 17:41:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:19.352 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.352 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:19.352 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:19.352 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:19.352 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.352 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:19.352 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:19.352 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:19.352 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:19.352 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:19.352 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:19.352 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:19.612 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:19.612 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:19.612 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:19.612 17:41:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local 
nbd_name=nbd0 00:07:19.612 17:41:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:19.612 17:41:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:19.612 17:41:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:19.612 17:41:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:19.612 17:41:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:19.612 17:41:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:19.612 17:41:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:19.612 17:41:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.612 1+0 records in 00:07:19.612 1+0 records out 00:07:19.612 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000763818 s, 5.4 MB/s 00:07:19.612 17:41:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.612 17:41:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:19.612 17:41:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.613 17:41:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:19.613 17:41:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:19.613 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:19.613 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:19.873 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:19.873 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:19.873 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:19.873 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:19.873 17:41:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:19.873 17:41:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:19.873 17:41:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:19.873 17:41:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:19.873 17:41:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:19.873 17:41:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:19.873 17:41:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:19.873 17:41:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:19.873 17:41:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.873 1+0 records in 00:07:19.873 1+0 records out 00:07:19.873 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000956711 s, 4.3 MB/s 00:07:20.132 17:41:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.132 17:41:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:20.132 17:41:39 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.132 17:41:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:20.132 17:41:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:20.132 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:20.132 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:20.132 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:20.132 17:41:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:20.132 17:41:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:20.132 17:41:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:20.132 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:07:20.132 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:20.132 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:20.132 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:20.132 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:07:20.132 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:20.132 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:20.132 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:20.132 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.132 1+0 records in 00:07:20.132 1+0 records out 00:07:20.132 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403462 s, 10.2 MB/s 00:07:20.392 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.392 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:20.392 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.392 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:20.392 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:20.392 17:41:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:20.393 17:41:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:20.393 17:41:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:20.393 17:41:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:20.393 17:41:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:20.393 17:41:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:20.393 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:07:20.393 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:20.393 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:20.393 17:41:40 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:20.393 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:07:20.393 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:20.393 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:20.393 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:20.393 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.393 1+0 records in 00:07:20.393 1+0 records out 00:07:20.393 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111237 s, 3.7 MB/s 00:07:20.393 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.393 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:20.393 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.393 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:20.393 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:20.393 17:41:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:20.393 17:41:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:20.393 17:41:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:20.655 17:41:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:20.655 17:41:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:20.655 17:41:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:20.655 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:07:20.655 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:20.655 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:20.655 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:20.655 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:07:20.655 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:20.655 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:20.655 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:20.655 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.655 1+0 records in 00:07:20.655 1+0 records out 00:07:20.655 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101961 s, 4.0 MB/s 00:07:20.655 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.655 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:20.655 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.655 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:20.655 17:41:40 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@891 -- # return 0 00:07:20.655 17:41:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:20.655 17:41:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:20.655 17:41:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:20.917 17:41:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:20.917 17:41:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:20.917 17:41:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:20.917 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:07:20.917 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:20.917 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:20.917 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:20.917 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:07:20.917 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:20.917 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:20.917 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:20.917 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.917 1+0 records in 00:07:20.917 1+0 records out 00:07:20.917 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000926458 s, 4.4 MB/s 00:07:20.917 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.917 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:20.917 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.917 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:20.917 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:20.917 17:41:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:20.917 17:41:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:20.917 17:41:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:21.178 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:21.178 { 00:07:21.178 "nbd_device": "/dev/nbd0", 00:07:21.178 "bdev_name": "Nvme0n1" 00:07:21.178 }, 00:07:21.178 { 00:07:21.178 "nbd_device": "/dev/nbd1", 00:07:21.178 "bdev_name": "Nvme1n1" 00:07:21.178 }, 00:07:21.178 { 00:07:21.178 "nbd_device": "/dev/nbd2", 00:07:21.178 "bdev_name": "Nvme2n1" 00:07:21.178 }, 00:07:21.178 { 00:07:21.178 "nbd_device": "/dev/nbd3", 00:07:21.178 "bdev_name": "Nvme2n2" 00:07:21.178 }, 00:07:21.178 { 00:07:21.178 "nbd_device": "/dev/nbd4", 00:07:21.178 "bdev_name": "Nvme2n3" 00:07:21.178 }, 00:07:21.178 { 00:07:21.178 "nbd_device": "/dev/nbd5", 00:07:21.178 "bdev_name": "Nvme3n1" 00:07:21.178 } 00:07:21.178 ]' 00:07:21.178 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 
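The nbd_get_disks RPC on nbd_common.sh@118 returns the JSON echoed just below, and the jq filter on @119 flattens it into a bash array of device paths that drives the nbd_stop_disk loop which follows. The same parse as a standalone sketch (mapfile in place of the word-splitting subshell; paths as in the log):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
# one /dev/nbdN per output line, in the order the server reports them
mapfile -t nbd_disks_name < <("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device')
printf 'exported: %s\n' "${nbd_disks_name[@]}"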
00:07:21.178 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:21.178 { 00:07:21.178 "nbd_device": "/dev/nbd0", 00:07:21.178 "bdev_name": "Nvme0n1" 00:07:21.178 }, 00:07:21.178 { 00:07:21.178 "nbd_device": "/dev/nbd1", 00:07:21.178 "bdev_name": "Nvme1n1" 00:07:21.178 }, 00:07:21.178 { 00:07:21.178 "nbd_device": "/dev/nbd2", 00:07:21.178 "bdev_name": "Nvme2n1" 00:07:21.178 }, 00:07:21.178 { 00:07:21.178 "nbd_device": "/dev/nbd3", 00:07:21.178 "bdev_name": "Nvme2n2" 00:07:21.178 }, 00:07:21.178 { 00:07:21.178 "nbd_device": "/dev/nbd4", 00:07:21.178 "bdev_name": "Nvme2n3" 00:07:21.178 }, 00:07:21.178 { 00:07:21.178 "nbd_device": "/dev/nbd5", 00:07:21.178 "bdev_name": "Nvme3n1" 00:07:21.178 } 00:07:21.178 ]' 00:07:21.178 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:21.178 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:21.178 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.178 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:21.178 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:21.178 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:21.178 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.178 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:21.439 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:21.439 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:21.439 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:21.439 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.439 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.439 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:21.439 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.439 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.439 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.439 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:21.701 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:21.701 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:21.701 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:21.701 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.701 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.701 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:21.701 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.701 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.701 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i 
in "${nbd_list[@]}" 00:07:21.701 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:21.963 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:21.963 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:21.963 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:21.963 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.963 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.963 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:21.963 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.963 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.963 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.963 17:41:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:22.227 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:22.227 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:22.227 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:22.227 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.227 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.227 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:22.227 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:22.227 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.227 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.227 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:22.517 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:22.517 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:22.517 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:22.517 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.517 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.517 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:22.517 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:22.517 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.517 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.517 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:22.794 17:41:42 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:22.794 17:41:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:23.056 /dev/nbd0 00:07:23.056 17:41:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:23.056 17:41:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:23.056 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:23.056 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:23.056 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:23.056 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:23.056 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:23.056 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:23.056 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:23.057 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:23.057 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:23.057 1+0 records in 00:07:23.057 1+0 records out 00:07:23.057 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000927162 s, 4.4 MB/s 00:07:23.057 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.057 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:23.057 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.057 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:23.057 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:23.057 17:41:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:23.057 17:41:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:23.057 17:41:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:23.319 /dev/nbd1 00:07:23.319 17:41:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:23.319 17:41:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:23.319 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:23.319 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:23.319 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:23.319 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:23.319 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:23.319 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:23.319 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:23.319 17:41:43 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:23.319 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:23.319 1+0 records in 00:07:23.319 1+0 records out 00:07:23.319 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116493 s, 3.5 MB/s 00:07:23.319 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.319 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:23.319 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.319 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:23.319 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:23.319 17:41:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:23.319 17:41:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:23.319 17:41:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:23.580 /dev/nbd10 00:07:23.580 17:41:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:23.580 17:41:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:23.580 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:07:23.580 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:23.580 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:23.580 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:23.580 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:07:23.580 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:23.580 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:23.580 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:23.580 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:23.580 1+0 records in 00:07:23.580 1+0 records out 00:07:23.580 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00131622 s, 3.1 MB/s 00:07:23.580 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.580 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:23.580 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.580 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:23.580 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:23.580 17:41:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:23.580 17:41:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:23.580 17:41:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:23.840 /dev/nbd11 
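Every nbd_start_disk above is followed by the same waitfornbd dance: poll /proc/partitions until the new name shows up, then prove the device is actually readable with a single 4 KiB O_DIRECT read whose size is checked afterwards. Reconstructed as a standalone function; the 20-try budget matches the (( i <= 20 )) guards in the trace, while the sleep interval and scratch path are assumptions:

waitfornbd_sketch() {
    local nbd_name=$1 i tmp=/tmp/nbdtest
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break   # device registered yet?
        sleep 0.1
    done
    # one direct-I/O read doubles as the liveness check
    dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
    [ "$(stat -c %s "$tmp")" -ne 0 ] || return 1           # must have read something
    rm -f "$tmp"
}

Once all six devices pass that check, the nbd_dd_data_verify phase further down pushes 1 MiB of urandom through each device and compares it back. A sketch of that write/verify cycle, with the device list hard-coded for illustration:

nbd_dd_data_verify_sketch() {
    local tmp=/tmp/nbdrandtest dev
    local nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    dd if=/dev/urandom of="$tmp" bs=4096 count=256 || return 1   # 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct || return 1   # write phase
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev" || return 1                              # verify phase
    done
    rm "$tmp"
}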
00:07:23.840 17:41:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:23.840 17:41:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:23.840 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:07:23.840 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:23.840 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:23.840 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:23.840 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:07:23.840 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:23.840 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:23.840 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:23.840 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:23.840 1+0 records in 00:07:23.840 1+0 records out 00:07:23.841 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100698 s, 4.1 MB/s 00:07:23.841 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.841 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:23.841 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.841 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:23.841 17:41:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:23.841 17:41:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:23.841 17:41:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:23.841 17:41:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:24.102 /dev/nbd12 00:07:24.102 17:41:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:24.102 17:41:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:24.102 17:41:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:07:24.102 17:41:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:24.102 17:41:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:24.102 17:41:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:24.102 17:41:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:07:24.102 17:41:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:24.102 17:41:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:24.102 17:41:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:24.102 17:41:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:24.102 1+0 records in 00:07:24.102 1+0 records out 00:07:24.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00123214 s, 3.3 MB/s 00:07:24.102 17:41:44 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.102 17:41:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:24.102 17:41:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.102 17:41:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:24.102 17:41:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:24.102 17:41:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:24.102 17:41:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:24.102 17:41:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:24.364 /dev/nbd13 00:07:24.364 17:41:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:24.364 17:41:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:24.364 17:41:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:07:24.364 17:41:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:24.364 17:41:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:24.364 17:41:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:24.364 17:41:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:07:24.364 17:41:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:24.364 17:41:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:24.364 17:41:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:24.364 17:41:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:24.364 1+0 records in 00:07:24.364 1+0 records out 00:07:24.364 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114452 s, 3.6 MB/s 00:07:24.364 17:41:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.364 17:41:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:24.364 17:41:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.364 17:41:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:24.364 17:41:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:24.364 17:41:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:24.364 17:41:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:24.364 17:41:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:24.364 17:41:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.364 17:41:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:24.625 17:41:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:24.625 { 00:07:24.625 "nbd_device": "/dev/nbd0", 00:07:24.625 "bdev_name": "Nvme0n1" 00:07:24.625 }, 00:07:24.625 { 00:07:24.625 "nbd_device": "/dev/nbd1", 
00:07:24.625 "bdev_name": "Nvme1n1" 00:07:24.625 }, 00:07:24.625 { 00:07:24.625 "nbd_device": "/dev/nbd10", 00:07:24.625 "bdev_name": "Nvme2n1" 00:07:24.625 }, 00:07:24.625 { 00:07:24.625 "nbd_device": "/dev/nbd11", 00:07:24.625 "bdev_name": "Nvme2n2" 00:07:24.625 }, 00:07:24.625 { 00:07:24.625 "nbd_device": "/dev/nbd12", 00:07:24.625 "bdev_name": "Nvme2n3" 00:07:24.625 }, 00:07:24.625 { 00:07:24.625 "nbd_device": "/dev/nbd13", 00:07:24.625 "bdev_name": "Nvme3n1" 00:07:24.625 } 00:07:24.625 ]' 00:07:24.625 17:41:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:24.625 { 00:07:24.625 "nbd_device": "/dev/nbd0", 00:07:24.625 "bdev_name": "Nvme0n1" 00:07:24.625 }, 00:07:24.625 { 00:07:24.625 "nbd_device": "/dev/nbd1", 00:07:24.625 "bdev_name": "Nvme1n1" 00:07:24.625 }, 00:07:24.625 { 00:07:24.625 "nbd_device": "/dev/nbd10", 00:07:24.625 "bdev_name": "Nvme2n1" 00:07:24.625 }, 00:07:24.625 { 00:07:24.625 "nbd_device": "/dev/nbd11", 00:07:24.625 "bdev_name": "Nvme2n2" 00:07:24.625 }, 00:07:24.625 { 00:07:24.625 "nbd_device": "/dev/nbd12", 00:07:24.625 "bdev_name": "Nvme2n3" 00:07:24.625 }, 00:07:24.625 { 00:07:24.625 "nbd_device": "/dev/nbd13", 00:07:24.625 "bdev_name": "Nvme3n1" 00:07:24.625 } 00:07:24.625 ]' 00:07:24.625 17:41:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:24.625 17:41:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:24.625 /dev/nbd1 00:07:24.625 /dev/nbd10 00:07:24.625 /dev/nbd11 00:07:24.625 /dev/nbd12 00:07:24.625 /dev/nbd13' 00:07:24.625 17:41:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:24.625 /dev/nbd1 00:07:24.625 /dev/nbd10 00:07:24.625 /dev/nbd11 00:07:24.625 /dev/nbd12 00:07:24.625 /dev/nbd13' 00:07:24.625 17:41:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:24.625 17:41:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:24.625 17:41:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:24.625 17:41:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:24.625 17:41:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:24.625 17:41:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:24.625 17:41:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:24.625 17:41:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:24.625 17:41:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:24.625 17:41:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:24.625 17:41:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:24.625 17:41:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:24.625 256+0 records in 00:07:24.625 256+0 records out 00:07:24.625 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00840301 s, 125 MB/s 00:07:24.625 17:41:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:24.625 17:41:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:24.887 
256+0 records in 00:07:24.887 256+0 records out 00:07:24.887 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.264548 s, 4.0 MB/s 00:07:24.887 17:41:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:24.887 17:41:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:25.147 256+0 records in 00:07:25.147 256+0 records out 00:07:25.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.243114 s, 4.3 MB/s 00:07:25.147 17:41:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:25.147 17:41:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:25.408 256+0 records in 00:07:25.408 256+0 records out 00:07:25.408 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.228202 s, 4.6 MB/s 00:07:25.408 17:41:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:25.408 17:41:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:25.670 256+0 records in 00:07:25.670 256+0 records out 00:07:25.670 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.276168 s, 3.8 MB/s 00:07:25.670 17:41:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:25.670 17:41:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:25.936 256+0 records in 00:07:25.936 256+0 records out 00:07:25.936 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.27025 s, 3.9 MB/s 00:07:25.936 17:41:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:25.936 17:41:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:26.197 256+0 records in 00:07:26.197 256+0 records out 00:07:26.197 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.27463 s, 3.8 MB/s 00:07:26.197 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:26.197 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:26.197 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:26.197 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:26.197 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:26.197 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:26.197 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:26.197 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:26.197 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:26.197 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:26.197 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:26.197 
17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:26.197 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:26.197 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:26.197 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:26.197 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:26.197 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:26.459 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:26.459 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:26.459 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:26.459 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:26.459 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.459 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:26.459 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:26.459 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:26.459 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:26.459 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:26.459 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:26.459 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:26.459 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:26.459 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:26.459 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:26.459 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:26.459 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:26.459 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:26.459 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:26.459 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:26.720 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:26.720 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:26.720 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:26.721 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:26.721 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:26.721 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 
-- # grep -q -w nbd1 /proc/partitions 00:07:26.721 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:26.721 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:26.721 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:26.721 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:26.982 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:26.982 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:26.982 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:26.982 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:26.982 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:26.982 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:26.982 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:26.982 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:26.982 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:26.982 17:41:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:27.244 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:27.244 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:27.244 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:27.244 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.244 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.244 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:27.244 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:27.244 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.244 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.244 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:27.506 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:27.506 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:27.506 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:27.506 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.506 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.506 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:27.506 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:27.506 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.506 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.506 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:27.767 17:41:47 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:27.767 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:27.767 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:27.767 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.767 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.767 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:27.767 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:27.767 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.767 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:27.767 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:27.767 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:27.767 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:27.767 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:27.767 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:28.028 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:28.028 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:28.028 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:28.028 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:28.028 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:28.028 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:28.028 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:28.028 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:28.028 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:28.028 17:41:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:28.028 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.028 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:28.028 17:41:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:28.028 malloc_lvol_verify 00:07:28.028 17:41:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:28.314 60ca6d29-158e-4cc1-b013-dc29fa57a7e7 00:07:28.314 17:41:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:28.575 4bf1f261-1e68-4038-9889-a9c31d9c411a 00:07:28.575 17:41:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:28.836 /dev/nbd0 00:07:28.836 17:41:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:28.836 17:41:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 
-- # local nbd=nbd0 00:07:28.836 17:41:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:28.836 17:41:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:28.836 17:41:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:28.836 mke2fs 1.47.0 (5-Feb-2023) 00:07:28.836 Discarding device blocks: 0/4096 done 00:07:28.836 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:28.836 00:07:28.836 Allocating group tables: 0/1 done 00:07:28.836 Writing inode tables: 0/1 done 00:07:28.836 Creating journal (1024 blocks): done 00:07:28.836 Writing superblocks and filesystem accounting information: 0/1 done 00:07:28.836 00:07:28.836 17:41:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:28.836 17:41:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.836 17:41:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:28.836 17:41:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:28.836 17:41:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:28.836 17:41:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:28.836 17:41:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:29.097 17:41:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:29.097 17:41:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:29.097 17:41:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:29.097 17:41:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:29.097 17:41:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:29.097 17:41:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:29.097 17:41:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:29.097 17:41:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:29.097 17:41:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73107 00:07:29.097 17:41:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 73107 ']' 00:07:29.097 17:41:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 73107 00:07:29.097 17:41:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:07:29.097 17:41:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:29.097 17:41:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73107 00:07:29.097 17:41:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:29.097 17:41:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:29.097 killing process with pid 73107 00:07:29.097 17:41:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73107' 00:07:29.097 17:41:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 73107 00:07:29.097 17:41:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 73107 00:07:29.097 17:41:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:29.097 00:07:29.097 real 0m10.687s 00:07:29.097 
user 0m14.571s 00:07:29.097 sys 0m3.797s 00:07:29.097 17:41:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:29.097 ************************************ 00:07:29.097 END TEST bdev_nbd 00:07:29.097 ************************************ 00:07:29.097 17:41:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:29.357 skipping fio tests on NVMe due to multi-ns failures. 00:07:29.357 17:41:49 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:29.357 17:41:49 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:07:29.357 17:41:49 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:07:29.357 17:41:49 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:29.357 17:41:49 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:29.357 17:41:49 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:07:29.357 17:41:49 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:29.357 17:41:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:29.357 ************************************ 00:07:29.357 START TEST bdev_verify 00:07:29.357 ************************************ 00:07:29.357 17:41:49 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:29.357 [2024-11-05 17:41:49.177485] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:07:29.357 [2024-11-05 17:41:49.177592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73486 ] 00:07:29.357 [2024-11-05 17:41:49.307002] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:29.357 [2024-11-05 17:41:49.329983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:29.357 [2024-11-05 17:41:49.350041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.357 [2024-11-05 17:41:49.350150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.928 Running I/O for 5 seconds... 
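While the verify run above ticks along, a note on the bdev_nbd test that just finished: it exports a bdev as a kernel /dev/nbd0 device over the /var/tmp/spdk-nbd.sock RPC socket, formats it with mkfs.ext4, and tears the export down again. A minimal standalone sketch of that flow, assuming a running SPDK app that exposes an Nvme0n1 bdev, rpc.py on PATH, and root privileges (the bdev name here is illustrative):

  sock=/var/tmp/spdk-nbd.sock
  # Export the bdev as a kernel NBD device (the RPC that nbd_common.sh wraps above).
  rpc.py -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0
  # Poll /proc/partitions until the kernel registers the device, as the harness does.
  until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done
  mkfs.ext4 /dev/nbd0
  # Stop the export; the harness then polls until nbd0 disappears again.
  rpc.py -s "$sock" nbd_stop_disk /dev/nbd0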
00:07:32.256 21440.00 IOPS, 83.75 MiB/s
[2024-11-05T17:41:53.191Z] 20736.00 IOPS, 81.00 MiB/s
[2024-11-05T17:41:54.134Z] 21354.67 IOPS, 83.42 MiB/s
[2024-11-05T17:41:55.076Z] 21440.00 IOPS, 83.75 MiB/s
[2024-11-05T17:41:55.076Z] 20940.80 IOPS, 81.80 MiB/s
00:07:35.085 Latency(us)
00:07:35.085 [2024-11-05T17:41:55.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:35.085 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:35.085 Verification LBA range: start 0x0 length 0xbd0bd
00:07:35.085 Nvme0n1 : 5.07 1728.73 6.75 0.00 0.00 73683.87 10233.70 74206.92
00:07:35.085 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:35.085 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:07:35.085 Nvme0n1 : 5.08 1713.19 6.69 0.00 0.00 74078.68 12250.19 63317.86
00:07:35.085 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:35.085 Verification LBA range: start 0x0 length 0xa0000
00:07:35.085 Nvme1n1 : 5.08 1727.17 6.75 0.00 0.00 73589.27 13611.32 66140.95
00:07:35.085 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:35.085 Verification LBA range: start 0xa0000 length 0xa0000
00:07:35.085 Nvme1n1 : 5.08 1712.11 6.69 0.00 0.00 73923.35 14317.10 62511.26
00:07:35.085 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:35.085 Verification LBA range: start 0x0 length 0x80000
00:07:35.085 Nvme2n1 : 5.09 1735.96 6.78 0.00 0.00 73296.45 8570.09 64527.75
00:07:35.085 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:35.085 Verification LBA range: start 0x80000 length 0x80000
00:07:35.085 Nvme2n1 : 5.09 1711.65 6.69 0.00 0.00 73795.89 13611.32 64931.05
00:07:35.085 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:35.085 Verification LBA range: start 0x0 length 0x80000
00:07:35.085 Nvme2n2 : 5.09 1735.51 6.78 0.00 0.00 73209.61 8872.57 67754.14
00:07:35.085 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:35.085 Verification LBA range: start 0x80000 length 0x80000
00:07:35.085 Nvme2n2 : 5.09 1711.19 6.68 0.00 0.00 73732.36 13006.38 67754.14
00:07:35.085 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:35.085 Verification LBA range: start 0x0 length 0x80000
00:07:35.085 Nvme2n3 : 5.09 1735.03 6.78 0.00 0.00 73107.54 9074.22 70173.93
00:07:35.085 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:35.085 Verification LBA range: start 0x80000 length 0x80000
00:07:35.085 Nvme2n3 : 5.08 1714.28 6.70 0.00 0.00 74453.44 15224.52 82272.89
00:07:35.085 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:35.085 Verification LBA range: start 0x0 length 0x20000
00:07:35.085 Nvme3n1 : 5.09 1734.55 6.78 0.00 0.00 72976.74 9376.69 71383.83
00:07:35.085 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:35.085 Verification LBA range: start 0x20000 length 0x20000
00:07:35.085 Nvme3n1 : 5.08 1713.72 6.69 0.00 0.00 74245.24 14619.57 65334.35
[2024-11-05T17:41:55.076Z] ===================================================================================================================
[2024-11-05T17:41:55.076Z] Total : 20673.10 80.75 0.00 0.00 73671.76 8570.09 82272.89
00:07:35.347
00:07:35.347 real 0m6.060s
00:07:35.347 user 0m11.433s
00:07:35.347 sys 0m0.201s
00:07:35.347 17:41:55 blockdev_nvme.bdev_verify --
common/autotest_common.sh@1128 -- # xtrace_disable 00:07:35.347 ************************************ 00:07:35.347 END TEST bdev_verify 00:07:35.347 ************************************ 00:07:35.347 17:41:55 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:35.347 17:41:55 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:35.347 17:41:55 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:07:35.347 17:41:55 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:35.347 17:41:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:35.347 ************************************ 00:07:35.347 START TEST bdev_verify_big_io 00:07:35.347 ************************************ 00:07:35.347 17:41:55 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:35.347 [2024-11-05 17:41:55.307397] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:07:35.347 [2024-11-05 17:41:55.307511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73575 ] 00:07:35.608 [2024-11-05 17:41:55.436482] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:35.608 [2024-11-05 17:41:55.465852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:35.608 [2024-11-05 17:41:55.486230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.608 [2024-11-05 17:41:55.486346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.180 Running I/O for 5 seconds... 
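This big-I/O pass is the same bdevperf harness as the verify run above; only the I/O size changes (65536 bytes here versus 4096 before). An annotated form of the invocation, with flag readings taken from bdevperf's usage text (paths shortened relative to the repo root):

  # -q 128    queue depth per job
  # -o 65536  I/O size in bytes (4096 in the previous verify pass)
  # -w verify write a pattern, read it back, and check it
  # -t 5      run time in seconds
  # -C        let every core send I/Os to each bdev -- hence the paired
  #           "Core Mask 0x1" / "Core Mask 0x2" rows per device in the tables
  # -m 0x3    SPDK core mask (reactors on cores 0 and 1)
  build/examples/bdevperf --json test/bdev/bdev.json \
      -q 128 -o 65536 -w verify -t 5 -C -m 0x3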
00:07:40.868 1187.00 IOPS, 74.19 MiB/s
[2024-11-05T17:42:02.243Z] 1910.00 IOPS, 119.38 MiB/s
[2024-11-05T17:42:02.243Z] 2479.33 IOPS, 154.96 MiB/s
00:07:42.252 Latency(us)
00:07:42.252 [2024-11-05T17:42:02.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:42.252 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:42.252 Verification LBA range: start 0x0 length 0xbd0b
00:07:42.252 Nvme0n1 : 5.75 122.46 7.65 0.00 0.00 1000308.65 17745.13 1084066.26
00:07:42.252 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:42.252 Verification LBA range: start 0xbd0b length 0xbd0b
00:07:42.252 Nvme0n1 : 5.98 125.75 7.86 0.00 0.00 854828.03 8973.39 1432516.14
00:07:42.252 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:42.252 Verification LBA range: start 0x0 length 0xa000
00:07:42.252 Nvme1n1 : 5.87 126.76 7.92 0.00 0.00 943696.45 51622.20 890483.00
00:07:42.252 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:42.252 Verification LBA range: start 0xa000 length 0xa000
00:07:42.252 Nvme1n1 : 6.07 173.39 10.84 0.00 0.00 603714.31 124.46 1471232.79
00:07:42.252 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:42.252 Verification LBA range: start 0x0 length 0x8000
00:07:42.252 Nvme2n1 : 5.87 126.91 7.93 0.00 0.00 911242.18 50815.61 864671.90
00:07:42.252 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:42.252 Verification LBA range: start 0x8000 length 0x8000
00:07:42.252 Nvme2n1 : 5.80 108.76 6.80 0.00 0.00 1133313.74 34280.37 1161499.57
00:07:42.252 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:42.252 Verification LBA range: start 0x0 length 0x8000
00:07:42.252 Nvme2n2 : 5.87 130.75 8.17 0.00 0.00 863153.23 62914.56 896935.78
00:07:42.252 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:42.252 Verification LBA range: start 0x8000 length 0x8000
00:07:42.252 Nvme2n2 : 5.81 101.96 6.37 0.00 0.00 1167745.93 98001.53 1871304.86
00:07:42.252 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:42.252 Verification LBA range: start 0x0 length 0x8000
00:07:42.252 Nvme2n3 : 5.94 133.43 8.34 0.00 0.00 816763.66 58074.98 916294.10
00:07:42.252 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:42.252 Verification LBA range: start 0x8000 length 0x8000
00:07:42.252 Nvme2n3 : 5.94 106.05 6.63 0.00 0.00 1081242.86 67754.14 1922927.06
00:07:42.252 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:42.252 Verification LBA range: start 0x0 length 0x2000
00:07:42.252 Nvme3n1 : 5.97 150.15 9.38 0.00 0.00 710132.73 3327.21 980821.86
00:07:42.252 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:42.252 Verification LBA range: start 0x2000 length 0x2000
00:07:42.252 Nvme3n1 : 5.97 111.44 6.96 0.00 0.00 996936.51 63317.86 1961643.72
[2024-11-05T17:42:02.243Z] ===================================================================================================================
[2024-11-05T17:42:02.243Z] Total : 1517.82 94.86 0.00 0.00 898425.01 124.46 1961643.72
00:07:42.824
00:07:42.824 real 0m7.532s
00:07:42.824 user 0m14.320s
00:07:42.824 sys 0m0.230s
00:07:42.824 17:42:02 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:42.824 17:42:02
blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:42.824 ************************************ 00:07:42.824 END TEST bdev_verify_big_io 00:07:42.824 ************************************ 00:07:43.085 17:42:02 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:43.085 17:42:02 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:07:43.085 17:42:02 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:43.085 17:42:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:43.085 ************************************ 00:07:43.085 START TEST bdev_write_zeroes 00:07:43.085 ************************************ 00:07:43.085 17:42:02 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:43.085 [2024-11-05 17:42:02.920096] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:07:43.085 [2024-11-05 17:42:02.920237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73681 ] 00:07:43.085 [2024-11-05 17:42:03.053006] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:43.346 [2024-11-05 17:42:03.085538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.346 [2024-11-05 17:42:03.116270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.621 Running I/O for 1 seconds... 
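A write_zeroes workload only runs against bdevs that advertise the operation; the capability surfaces as supported_io_types.write_zeroes in the bdev_get_bdevs dumps later in this log. A quick way to check a live target, assuming rpc.py and jq are available and the app listens on the default /var/tmp/spdk.sock (the bdev name is illustrative):

  # Prints true or false for the named bdev; -w write_zeroes needs true.
  rpc.py bdev_get_bdevs -b Nvme0n1 | jq '.[0].supported_io_types.write_zeroes'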
00:07:45.006 51008.00 IOPS, 199.25 MiB/s
00:07:45.006 Latency(us)
00:07:45.006 [2024-11-05T17:42:04.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:45.006 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:45.006 Nvme0n1 : 1.02 8516.79 33.27 0.00 0.00 14991.96 5066.44 31457.28
00:07:45.006 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:45.006 Nvme1n1 : 1.02 8506.86 33.23 0.00 0.00 14994.21 9830.40 25306.98
00:07:45.006 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:45.006 Nvme2n1 : 1.02 8497.22 33.19 0.00 0.00 14939.82 9981.64 24500.38
00:07:45.006 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:45.006 Nvme2n2 : 1.03 8487.48 33.15 0.00 0.00 14934.37 9931.22 23189.66
00:07:45.006 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:45.006 Nvme2n3 : 1.03 8477.75 33.12 0.00 0.00 14909.55 9679.16 23189.66
00:07:45.006 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:45.006 Nvme3n1 : 1.03 8405.81 32.84 0.00 0.00 14972.11 9376.69 24903.68
[2024-11-05T17:42:04.997Z] ===================================================================================================================
[2024-11-05T17:42:04.997Z] Total : 50891.93 198.80 0.00 0.00 14956.99 5066.44 31457.28
00:07:45.006
00:07:45.006 real 0m1.947s
00:07:45.006 user 0m1.578s
00:07:45.006 sys 0m0.251s
00:07:45.006 17:42:04 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:45.006 ************************************
00:07:45.006 END TEST bdev_write_zeroes
00:07:45.006 ************************************
00:07:45.006 17:42:04 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:07:45.006 17:42:04 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:45.006 17:42:04 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']'
00:07:45.006 17:42:04 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:45.006 17:42:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:45.006 ************************************
00:07:45.006 START TEST bdev_json_nonenclosed
00:07:45.006 ************************************
00:07:45.006 17:42:04 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:45.006 [2024-11-05 17:42:04.945514] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization...
00:07:45.006 [2024-11-05 17:42:04.945656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73724 ]
00:07:45.267 [2024-11-05 17:42:05.079655] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation.
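The two json_config tests that follow feed deliberately malformed --json files to bdevperf and expect it to bail out. For reference, the loader wants a top-level object holding a "subsystems" array; a minimal valid sketch (contents illustrative):

  cat > /tmp/minimal.json <<'EOF'
  {
    "subsystems": [
      { "subsystem": "bdev", "config": [] }
    ]
  }
  EOF
  # nonenclosed.json drops the outer {}        -> "not enclosed in {}."
  # nonarray.json makes "subsystems" non-array -> "'subsystems' should be an array."

Both error strings appear verbatim in the output just below.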
00:07:45.267 [2024-11-05 17:42:05.110228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.267 [2024-11-05 17:42:05.139430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.267 [2024-11-05 17:42:05.139536] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:45.267 [2024-11-05 17:42:05.139554] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:45.268 [2024-11-05 17:42:05.139564] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:45.268 00:07:45.268 real 0m0.348s 00:07:45.268 user 0m0.133s 00:07:45.268 sys 0m0.111s 00:07:45.268 17:42:05 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:45.268 ************************************ 00:07:45.268 END TEST bdev_json_nonenclosed 00:07:45.268 ************************************ 00:07:45.268 17:42:05 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:45.529 17:42:05 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:45.529 17:42:05 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:07:45.529 17:42:05 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:45.529 17:42:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:45.529 ************************************ 00:07:45.529 START TEST bdev_json_nonarray 00:07:45.529 ************************************ 00:07:45.529 17:42:05 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:45.529 [2024-11-05 17:42:05.369672] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:07:45.529 [2024-11-05 17:42:05.369836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73744 ] 00:07:45.529 [2024-11-05 17:42:05.506690] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:45.789 [2024-11-05 17:42:05.533245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.789 [2024-11-05 17:42:05.562397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.789 [2024-11-05 17:42:05.562516] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:07:45.789 [2024-11-05 17:42:05.562535] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:45.790 [2024-11-05 17:42:05.562547] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:45.790 00:07:45.790 real 0m0.355s 00:07:45.790 user 0m0.128s 00:07:45.790 sys 0m0.122s 00:07:45.790 17:42:05 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:45.790 17:42:05 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:45.790 ************************************ 00:07:45.790 END TEST bdev_json_nonarray 00:07:45.790 ************************************ 00:07:45.790 17:42:05 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:07:45.790 17:42:05 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:07:45.790 17:42:05 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:07:45.790 17:42:05 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:45.790 17:42:05 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:07:45.790 17:42:05 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:45.790 17:42:05 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:45.790 17:42:05 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:45.790 17:42:05 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:45.790 17:42:05 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:45.790 17:42:05 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:45.790 00:07:45.790 real 0m31.824s 00:07:45.790 user 0m48.669s 00:07:45.790 sys 0m6.010s 00:07:45.790 17:42:05 blockdev_nvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:45.790 17:42:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:45.790 ************************************ 00:07:45.790 END TEST blockdev_nvme 00:07:45.790 ************************************ 00:07:45.790 17:42:05 -- spdk/autotest.sh@209 -- # uname -s 00:07:45.790 17:42:05 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:45.790 17:42:05 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:45.790 17:42:05 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:45.790 17:42:05 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:45.790 17:42:05 -- common/autotest_common.sh@10 -- # set +x 00:07:45.790 ************************************ 00:07:45.790 START TEST blockdev_nvme_gpt 00:07:45.790 ************************************ 00:07:45.790 17:42:05 blockdev_nvme_gpt -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:46.051 * Looking for test storage... 
00:07:46.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:46.051 17:42:05 blockdev_nvme_gpt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:46.051 17:42:05 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:46.051 17:42:05 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lcov --version 00:07:46.051 17:42:05 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:46.051 17:42:05 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.051 17:42:05 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.051 17:42:05 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.051 17:42:05 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.051 17:42:05 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.051 17:42:05 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.051 17:42:05 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.051 17:42:05 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.051 17:42:05 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.051 17:42:05 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.051 17:42:05 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.051 17:42:05 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:46.051 17:42:05 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:46.051 17:42:05 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.051 17:42:05 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:46.051 17:42:05 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:46.051 17:42:05 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:46.051 17:42:05 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.051 17:42:05 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:46.051 17:42:05 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.051 17:42:05 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:46.051 17:42:05 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:46.051 17:42:05 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.051 17:42:05 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:46.051 17:42:05 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.051 17:42:05 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.051 17:42:05 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.051 17:42:05 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:46.051 17:42:05 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.051 17:42:05 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:46.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.051 --rc genhtml_branch_coverage=1 00:07:46.051 --rc genhtml_function_coverage=1 00:07:46.051 --rc genhtml_legend=1 00:07:46.051 --rc geninfo_all_blocks=1 00:07:46.051 --rc geninfo_unexecuted_blocks=1 00:07:46.051 00:07:46.051 ' 00:07:46.051 17:42:05 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:46.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.051 --rc 
genhtml_branch_coverage=1 00:07:46.051 --rc genhtml_function_coverage=1 00:07:46.051 --rc genhtml_legend=1 00:07:46.051 --rc geninfo_all_blocks=1 00:07:46.051 --rc geninfo_unexecuted_blocks=1 00:07:46.051 00:07:46.051 ' 00:07:46.051 17:42:05 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:46.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.051 --rc genhtml_branch_coverage=1 00:07:46.051 --rc genhtml_function_coverage=1 00:07:46.051 --rc genhtml_legend=1 00:07:46.051 --rc geninfo_all_blocks=1 00:07:46.051 --rc geninfo_unexecuted_blocks=1 00:07:46.051 00:07:46.051 ' 00:07:46.051 17:42:05 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:46.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.051 --rc genhtml_branch_coverage=1 00:07:46.051 --rc genhtml_function_coverage=1 00:07:46.051 --rc genhtml_legend=1 00:07:46.051 --rc geninfo_all_blocks=1 00:07:46.051 --rc geninfo_unexecuted_blocks=1 00:07:46.051 00:07:46.051 ' 00:07:46.051 17:42:05 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:46.052 17:42:05 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:46.052 17:42:05 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:46.052 17:42:05 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:46.052 17:42:05 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:46.052 17:42:05 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:46.052 17:42:05 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:46.052 17:42:05 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:46.052 17:42:05 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:46.052 17:42:05 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:46.052 17:42:05 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:46.052 17:42:05 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:46.052 17:42:05 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:07:46.052 17:42:05 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:46.052 17:42:05 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:46.052 17:42:05 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:07:46.052 17:42:05 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:46.052 17:42:05 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:07:46.052 17:42:05 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:46.052 17:42:05 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:46.052 17:42:05 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:46.052 17:42:05 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:07:46.052 17:42:05 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:07:46.052 17:42:05 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:46.052 17:42:05 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73828 00:07:46.052 17:42:05 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:46.052 17:42:05 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 73828 
00:07:46.052 17:42:05 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # '[' -z 73828 ']' 00:07:46.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.052 17:42:05 blockdev_nvme_gpt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.052 17:42:05 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:46.052 17:42:05 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.052 17:42:05 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:46.052 17:42:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:46.052 17:42:05 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:46.312 [2024-11-05 17:42:06.058012] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:07:46.312 [2024-11-05 17:42:06.058258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73828 ] 00:07:46.312 [2024-11-05 17:42:06.207309] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:46.312 [2024-11-05 17:42:06.237473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.312 [2024-11-05 17:42:06.267505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.884 17:42:06 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:46.884 17:42:06 blockdev_nvme_gpt -- common/autotest_common.sh@866 -- # return 0 00:07:46.884 17:42:06 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:46.884 17:42:06 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:46.884 17:42:06 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:47.459 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:47.459 Waiting for block devices as requested 00:07:47.459 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:47.719 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:47.719 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:47.719 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:53.028 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:53.028 17:42:12 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:53.028 17:42:12 blockdev_nvme_gpt 
-- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:53.028 17:42:12 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:53.028 17:42:12 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:53.028 17:42:12 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:53.029 17:42:12 
blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:53.029 17:42:12 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:53.029 17:42:12 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:53.029 17:42:12 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:53.029 17:42:12 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:53.029 17:42:12 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:53.029 BYT; 00:07:53.029 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:53.029 17:42:12 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:53.029 BYT; 00:07:53.029 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:53.029 17:42:12 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:53.029 17:42:12 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:53.029 17:42:12 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:53.029 17:42:12 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:53.029 17:42:12 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:53.029 17:42:12 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:53.029 17:42:12 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:53.029 17:42:12 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:53.029 17:42:12 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:53.029 17:42:12 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:53.029 17:42:12 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:53.029 17:42:12 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:53.029 17:42:12 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:53.029 17:42:12 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:53.029 17:42:12 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:53.029 17:42:12 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:53.029 17:42:12 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:53.029 17:42:12 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:53.029 17:42:12 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:53.029 17:42:12 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:53.029 17:42:12 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:53.029 17:42:12 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:53.029 17:42:12 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:53.029 17:42:12 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w 
SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:53.029 17:42:12 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:53.029 17:42:12 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:53.029 17:42:12 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:53.029 17:42:12 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:53.029 17:42:12 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:53.963 The operation has completed successfully. 00:07:53.963 17:42:13 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:55.335 The operation has completed successfully. 00:07:55.335 17:42:14 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:55.335 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:55.901 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:55.901 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:55.901 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:55.901 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:56.160 17:42:15 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:56.160 17:42:15 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.160 17:42:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.160 [] 00:07:56.160 17:42:15 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.160 17:42:15 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:56.160 17:42:15 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:56.160 17:42:15 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:56.160 17:42:15 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:56.160 17:42:15 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:56.160 17:42:15 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.160 17:42:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.419 17:42:16 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.419 17:42:16 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:56.419 17:42:16 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.419 17:42:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.419 17:42:16 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.419 17:42:16 blockdev_nvme_gpt -- 
bdev/blockdev.sh@739 -- # cat 00:07:56.419 17:42:16 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:56.419 17:42:16 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.419 17:42:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.419 17:42:16 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.419 17:42:16 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:56.419 17:42:16 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.419 17:42:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.419 17:42:16 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.419 17:42:16 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:56.419 17:42:16 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.419 17:42:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.419 17:42:16 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.419 17:42:16 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:56.419 17:42:16 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:56.419 17:42:16 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.419 17:42:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.419 17:42:16 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:56.419 17:42:16 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.419 17:42:16 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:56.419 17:42:16 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:56.420 17:42:16 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "06f275ff-6ee5-4cf9-858a-4c2581c35d7f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "06f275ff-6ee5-4cf9-858a-4c2581c35d7f",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' 
'}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "172c2ea1-15c0-4e15-b03c-969c29e19c32"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "172c2ea1-15c0-4e15-b03c-969c29e19c32",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' 
"security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "0f05ce01-fbb8-4f40-aae6-1c57e9c71190"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0f05ce01-fbb8-4f40-aae6-1c57e9c71190",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "de8e2d35-71be-42c1-bebb-461eba7d8b88"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "de8e2d35-71be-42c1-bebb-461eba7d8b88",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "99687471-1cca-421d-b48e-21bb40d3ebc4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": 
"99687471-1cca-421d-b48e-21bb40d3ebc4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:56.420 17:42:16 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:56.420 17:42:16 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:56.420 17:42:16 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:56.420 17:42:16 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 73828 00:07:56.420 17:42:16 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # '[' -z 73828 ']' 00:07:56.420 17:42:16 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # kill -0 73828 00:07:56.420 17:42:16 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # uname 00:07:56.420 17:42:16 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:56.420 17:42:16 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73828 00:07:56.679 killing process with pid 73828 00:07:56.679 17:42:16 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:56.679 17:42:16 blockdev_nvme_gpt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:56.679 17:42:16 blockdev_nvme_gpt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73828' 00:07:56.679 17:42:16 blockdev_nvme_gpt -- common/autotest_common.sh@971 -- # kill 73828 00:07:56.679 17:42:16 blockdev_nvme_gpt -- common/autotest_common.sh@976 -- # wait 73828 00:07:56.938 17:42:16 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:56.938 17:42:16 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:56.938 17:42:16 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:07:56.938 17:42:16 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:56.938 17:42:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.938 ************************************ 00:07:56.938 START TEST bdev_hello_world 00:07:56.938 ************************************ 00:07:56.938 17:42:16 blockdev_nvme_gpt.bdev_hello_world -- 
common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:56.938 [2024-11-05 17:42:16.874239] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:07:56.938 [2024-11-05 17:42:16.874385] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74432 ] 00:07:57.196 [2024-11-05 17:42:17.008036] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:57.196 [2024-11-05 17:42:17.040915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.196 [2024-11-05 17:42:17.070541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.761 [2024-11-05 17:42:17.461469] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:57.761 [2024-11-05 17:42:17.461525] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:57.761 [2024-11-05 17:42:17.461561] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:57.761 [2024-11-05 17:42:17.463903] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:57.761 [2024-11-05 17:42:17.464766] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:57.761 [2024-11-05 17:42:17.464801] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:57.761 [2024-11-05 17:42:17.465566] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:57.761 00:07:57.761 [2024-11-05 17:42:17.465601] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:57.761 ************************************ 00:07:57.761 00:07:57.761 real 0m0.845s 00:07:57.761 user 0m0.545s 00:07:57.761 sys 0m0.194s 00:07:57.761 17:42:17 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:57.761 17:42:17 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:57.761 END TEST bdev_hello_world 00:07:57.761 ************************************ 00:07:57.761 17:42:17 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:57.761 17:42:17 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:57.761 17:42:17 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:57.761 17:42:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:57.761 ************************************ 00:07:57.761 START TEST bdev_bounds 00:07:57.761 ************************************ 00:07:57.761 17:42:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:07:57.761 Process bdevio pid: 74463 00:07:57.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
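The hello_world test above drives the stock hello_bdev example binary against the first bdev in the generated config. A minimal sketch of the equivalent manual invocation, assuming the same repo checkout and the bdev.json this run produced (run with root privileges, since the example claims the PCIe NVMe devices):

  # Run the hello_bdev example against bdev Nvme0n1; it opens the bdev,
  # writes "Hello World!", reads it back, and stops the app -- the same
  # NOTICE sequence traced in the hello_bdev.c lines above.
  sudo /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -b Nvme0n1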
00:07:57.761 17:42:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74463 00:07:57.761 17:42:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:57.761 17:42:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74463' 00:07:57.761 17:42:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74463 00:07:57.761 17:42:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 74463 ']' 00:07:57.761 17:42:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:57.761 17:42:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.761 17:42:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:57.761 17:42:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.761 17:42:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:57.761 17:42:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:57.761 [2024-11-05 17:42:17.752983] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:07:57.761 [2024-11-05 17:42:17.753139] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74463 ] 00:07:58.164 [2024-11-05 17:42:17.884768] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
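waitforlisten above simply blocks until the freshly started bdevio process answers RPCs on /var/tmp/spdk.sock, so that tests.py perform_tests is not sent too early. A minimal sketch of the same polling idea, assuming rpc.py from the SPDK scripts tree; rpc_get_methods is used here only as a cheap request that succeeds once the RPC server is up:

  # Poll the app's UNIX-domain RPC socket until it responds.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done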
00:07:58.164 [2024-11-05 17:42:17.911299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:58.164 [2024-11-05 17:42:17.940903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.164 [2024-11-05 17:42:17.941217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:58.164 [2024-11-05 17:42:17.941238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.732 17:42:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:58.732 17:42:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:07:58.732 17:42:18 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:58.732 I/O targets: 00:07:58.732 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:58.732 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:58.732 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:58.732 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:58.732 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:58.732 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:58.732 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:58.732 00:07:58.732 00:07:58.732 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.732 http://cunit.sourceforge.net/ 00:07:58.732 00:07:58.732 00:07:58.732 Suite: bdevio tests on: Nvme3n1 00:07:58.732 Test: blockdev write read block ...passed 00:07:58.732 Test: blockdev write zeroes read block ...passed 00:07:58.732 Test: blockdev write zeroes read no split ...passed 00:07:58.732 Test: blockdev write zeroes read split ...passed 00:07:58.732 Test: blockdev write zeroes read split partial ...passed 00:07:58.732 Test: blockdev reset ...[2024-11-05 17:42:18.696468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:58.732 passed 00:07:58.732 Test: blockdev write read 8 blocks ...[2024-11-05 17:42:18.699917] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:07:58.732 passed 00:07:58.732 Test: blockdev write read size > 128k ...passed 00:07:58.732 Test: blockdev write read invalid size ...passed 00:07:58.732 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:58.732 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:58.732 Test: blockdev write read max offset ...passed 00:07:58.732 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:58.732 Test: blockdev writev readv 8 blocks ...passed 00:07:58.732 Test: blockdev writev readv 30 x 1block ...passed 00:07:58.732 Test: blockdev writev readv block ...passed 00:07:58.732 Test: blockdev writev readv size > 128k ...passed 00:07:58.732 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:58.732 Test: blockdev comparev and writev ...[2024-11-05 17:42:18.714687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9e0e000 len:0x1000 00:07:58.732 [2024-11-05 17:42:18.714930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:58.732 passed 00:07:58.732 Test: blockdev nvme passthru rw ...passed 00:07:58.732 Test: blockdev nvme passthru vendor specific ...passed 00:07:58.732 Test: blockdev nvme admin passthru ...[2024-11-05 17:42:18.715752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:58.732 [2024-11-05 17:42:18.715816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:58.732 passed 00:07:58.732 Test: blockdev copy ...passed 00:07:58.732 Suite: bdevio tests on: Nvme2n3 00:07:58.732 Test: blockdev write read block ...passed 00:07:58.993 Test: blockdev write zeroes read block ...passed 00:07:58.993 Test: blockdev write zeroes read no split ...passed 00:07:58.993 Test: blockdev write zeroes read split ...passed 00:07:58.993 Test: blockdev write zeroes read split partial ...passed 00:07:58.993 Test: blockdev reset ...[2024-11-05 17:42:18.734578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:58.993 [2024-11-05 17:42:18.739124] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:58.993 passed 00:07:58.993 Test: blockdev write read 8 blocks ...passed 00:07:58.993 Test: blockdev write read size > 128k ...passed 00:07:58.993 Test: blockdev write read invalid size ...passed 00:07:58.994 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:58.994 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:58.994 Test: blockdev write read max offset ...passed 00:07:58.994 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:58.994 Test: blockdev writev readv 8 blocks ...passed 00:07:58.994 Test: blockdev writev readv 30 x 1block ...passed 00:07:58.994 Test: blockdev writev readv block ...passed 00:07:58.994 Test: blockdev writev readv size > 128k ...passed 00:07:58.994 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:58.994 Test: blockdev comparev and writev ...[2024-11-05 17:42:18.755717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9e0a000 len:0x1000 00:07:58.994 [2024-11-05 17:42:18.755871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:58.994 passed 00:07:58.994 Test: blockdev nvme passthru rw ...passed 00:07:58.994 Test: blockdev nvme passthru vendor specific ...passed 00:07:58.994 Test: blockdev nvme admin passthru ...[2024-11-05 17:42:18.758565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:58.994 [2024-11-05 17:42:18.758602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:58.994 passed 00:07:58.994 Test: blockdev copy ...passed 00:07:58.994 Suite: bdevio tests on: Nvme2n2 00:07:58.994 Test: blockdev write read block ...passed 00:07:58.994 Test: blockdev write zeroes read block ...passed 00:07:58.994 Test: blockdev write zeroes read no split ...passed 00:07:58.994 Test: blockdev write zeroes read split ...passed 00:07:58.994 Test: blockdev write zeroes read split partial ...passed 00:07:58.994 Test: blockdev reset ...[2024-11-05 17:42:18.785270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:58.994 passed 00:07:58.994 Test: blockdev write read 8 blocks ...[2024-11-05 17:42:18.788727] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:58.994 passed 00:07:58.994 Test: blockdev write read size > 128k ...passed 00:07:58.994 Test: blockdev write read invalid size ...passed 00:07:58.994 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:58.994 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:58.994 Test: blockdev write read max offset ...passed 00:07:58.994 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:58.994 Test: blockdev writev readv 8 blocks ...passed 00:07:58.994 Test: blockdev writev readv 30 x 1block ...passed 00:07:58.994 Test: blockdev writev readv block ...passed 00:07:58.994 Test: blockdev writev readv size > 128k ...passed 00:07:58.994 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:58.994 Test: blockdev comparev and writev ...[2024-11-05 17:42:18.803354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d1205000 len:0x1000 00:07:58.994 [2024-11-05 17:42:18.803412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:58.994 passed 00:07:58.994 Test: blockdev nvme passthru rw ...passed 00:07:58.994 Test: blockdev nvme passthru vendor specific ...passed 00:07:58.994 Test: blockdev nvme admin passthru ...[2024-11-05 17:42:18.806257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:58.994 [2024-11-05 17:42:18.806290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:58.994 passed 00:07:58.994 Test: blockdev copy ...passed 00:07:58.994 Suite: bdevio tests on: Nvme2n1 00:07:58.994 Test: blockdev write read block ...passed 00:07:58.994 Test: blockdev write zeroes read block ...passed 00:07:58.994 Test: blockdev write zeroes read no split ...passed 00:07:58.994 Test: blockdev write zeroes read split ...passed 00:07:58.994 Test: blockdev write zeroes read split partial ...passed 00:07:58.994 Test: blockdev reset ...[2024-11-05 17:42:18.836702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:58.994 [2024-11-05 17:42:18.839866] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:07:58.994 Test: blockdev write read 8 blocks ...
00:07:58.994 passed 00:07:58.994 Test: blockdev write read size > 128k ...passed 00:07:58.994 Test: blockdev write read invalid size ...passed 00:07:58.994 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:58.994 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:58.994 Test: blockdev write read max offset ...passed 00:07:58.994 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:58.994 Test: blockdev writev readv 8 blocks ...passed 00:07:58.994 Test: blockdev writev readv 30 x 1block ...passed 00:07:58.994 Test: blockdev writev readv block ...passed 00:07:58.994 Test: blockdev writev readv size > 128k ...passed 00:07:58.994 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:58.994 Test: blockdev comparev and writev ...[2024-11-05 17:42:18.856976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ca202000 len:0x1000 00:07:58.994 [2024-11-05 17:42:18.857035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:58.994 passed 00:07:58.994 Test: blockdev nvme passthru rw ...passed 00:07:58.994 Test: blockdev nvme passthru vendor specific ...passed 00:07:58.994 Test: blockdev nvme admin passthru ...[2024-11-05 17:42:18.858662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:58.994 [2024-11-05 17:42:18.858694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:58.994 passed 00:07:58.994 Test: blockdev copy ...passed 00:07:58.994 Suite: bdevio tests on: Nvme1n1p2 00:07:58.994 Test: blockdev write read block ...passed 00:07:58.994 Test: blockdev write zeroes read block ...passed 00:07:58.994 Test: blockdev write zeroes read no split ...passed 00:07:58.994 Test: blockdev write zeroes read split ...passed 00:07:58.994 Test: blockdev write zeroes read split partial ...passed 00:07:58.994 Test: blockdev reset ...[2024-11-05 17:42:18.885882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:58.994 passed 00:07:58.994 Test: blockdev write read 8 blocks ...[2024-11-05 17:42:18.888402] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:58.994 passed 00:07:58.994 Test: blockdev write read size > 128k ...passed 00:07:58.994 Test: blockdev write read invalid size ...passed 00:07:58.994 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:58.994 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:58.994 Test: blockdev write read max offset ...passed 00:07:58.994 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:58.994 Test: blockdev writev readv 8 blocks ...passed 00:07:58.994 Test: blockdev writev readv 30 x 1block ...passed 00:07:58.994 Test: blockdev writev readv block ...passed 00:07:58.994 Test: blockdev writev readv size > 128k ...passed 00:07:58.994 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:58.994 Test: blockdev comparev and writev ...[2024-11-05 17:42:18.898136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2e463b000 len:0x1000 00:07:58.994 [2024-11-05 17:42:18.898186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:58.994 passed 00:07:58.994 Test: blockdev nvme passthru rw ...passed 00:07:58.994 Test: blockdev nvme passthru vendor specific ...passed 00:07:58.994 Test: blockdev nvme admin passthru ...passed 00:07:58.994 Test: blockdev copy ...passed 00:07:58.994 Suite: bdevio tests on: Nvme1n1p1 00:07:58.994 Test: blockdev write read block ...passed 00:07:58.994 Test: blockdev write zeroes read block ...passed 00:07:58.994 Test: blockdev write zeroes read no split ...passed 00:07:58.994 Test: blockdev write zeroes read split ...passed 00:07:58.994 Test: blockdev write zeroes read split partial ...passed 00:07:58.994 Test: blockdev reset ...[2024-11-05 17:42:18.914242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:58.994 [2024-11-05 17:42:18.917858] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. passed 00:07:58.994 Test: blockdev write read 8 blocks ...
00:07:58.994 passed 00:07:58.994 Test: blockdev write read size > 128k ...passed 00:07:58.994 Test: blockdev write read invalid size ...passed 00:07:58.994 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:58.994 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:58.994 Test: blockdev write read max offset ...passed 00:07:58.994 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:58.994 Test: blockdev writev readv 8 blocks ...passed 00:07:58.994 Test: blockdev writev readv 30 x 1block ...passed 00:07:58.994 Test: blockdev writev readv block ...passed 00:07:58.994 Test: blockdev writev readv size > 128k ...passed 00:07:58.994 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:58.994 Test: blockdev comparev and writev ...[2024-11-05 17:42:18.934270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2e4637000 len:0x1000 00:07:58.995 [2024-11-05 17:42:18.934316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:58.995 passed 00:07:58.995 Test: blockdev nvme passthru rw ...passed 00:07:58.995 Test: blockdev nvme passthru vendor specific ...passed 00:07:58.995 Test: blockdev nvme admin passthru ...passed 00:07:58.995 Test: blockdev copy ...passed 00:07:58.995 Suite: bdevio tests on: Nvme0n1 00:07:58.995 Test: blockdev write read block ...passed 00:07:58.995 Test: blockdev write zeroes read block ...passed 00:07:58.995 Test: blockdev write zeroes read no split ...passed 00:07:58.995 Test: blockdev write zeroes read split ...passed 00:07:58.995 Test: blockdev write zeroes read split partial ...passed 00:07:58.995 Test: blockdev reset ...[2024-11-05 17:42:18.949488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:58.995 passed 00:07:58.995 Test: blockdev write read 8 blocks ...[2024-11-05 17:42:18.951422] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:58.995 passed 00:07:58.995 Test: blockdev write read size > 128k ...passed 00:07:58.995 Test: blockdev write read invalid size ...passed 00:07:58.995 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:58.995 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:58.995 Test: blockdev write read max offset ...passed 00:07:58.995 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:58.995 Test: blockdev writev readv 8 blocks ...passed 00:07:58.995 Test: blockdev writev readv 30 x 1block ...passed 00:07:58.995 Test: blockdev writev readv block ...passed 00:07:58.995 Test: blockdev writev readv size > 128k ...passed 00:07:58.995 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:58.995 Test: blockdev comparev and writev ...passed 00:07:58.995 Test: blockdev nvme passthru rw ...[2024-11-05 17:42:18.956861] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:58.995 separate metadata which is not supported yet. 
00:07:58.995 passed 00:07:58.995 Test: blockdev nvme passthru vendor specific ...passed 00:07:58.995 Test: blockdev nvme admin passthru ...[2024-11-05 17:42:18.957363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:58.995 [2024-11-05 17:42:18.957404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:58.995 passed 00:07:58.995 Test: blockdev copy ...passed 00:07:58.995 00:07:58.995 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.995 suites 7 7 n/a 0 0 00:07:58.995 tests 161 161 161 0 0 00:07:58.995 asserts 1025 1025 1025 0 n/a 00:07:58.995 00:07:58.995 Elapsed time = 0.621 seconds 00:07:58.995 0 00:07:58.995 17:42:18 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74463 00:07:58.995 17:42:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 74463 ']' 00:07:58.995 17:42:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 74463 00:07:58.995 17:42:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:07:58.995 17:42:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:58.995 17:42:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74463 00:07:59.255 17:42:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:59.255 17:42:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:59.255 17:42:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74463' 00:07:59.255 killing process with pid 74463 00:07:59.255 17:42:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@971 -- # kill 74463 00:07:59.255 17:42:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@976 -- # wait 74463 00:07:59.255 17:42:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:59.255 00:07:59.255 real 0m1.502s 00:07:59.255 user 0m3.681s 00:07:59.255 sys 0m0.310s 00:07:59.255 17:42:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:59.255 17:42:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:59.255 ************************************ 00:07:59.255 END TEST bdev_bounds 00:07:59.255 ************************************ 00:07:59.255 17:42:19 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:59.255 17:42:19 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:59.255 17:42:19 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:59.255 17:42:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:59.517 ************************************ 00:07:59.517 START TEST bdev_nbd 00:07:59.517 ************************************ 00:07:59.517 17:42:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:59.517 17:42:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:59.517 17:42:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:07:59.517 17:42:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:59.517 17:42:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:59.517 17:42:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:59.517 17:42:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:59.517 17:42:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:59.517 17:42:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:59.517 17:42:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:59.517 17:42:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:59.517 17:42:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:59.517 17:42:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:59.517 17:42:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:59.517 17:42:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:59.517 17:42:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:59.517 17:42:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74512 00:07:59.517 17:42:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:59.517 17:42:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74512 /var/tmp/spdk-nbd.sock 00:07:59.517 17:42:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:59.517 17:42:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 74512 ']' 00:07:59.517 17:42:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:59.517 17:42:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:59.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:59.517 17:42:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:59.517 17:42:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:59.517 17:42:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:59.517 [2024-11-05 17:42:19.339875] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 
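nbd_function_test above starts a bdev_svc app listening on /var/tmp/spdk-nbd.sock and then exports each bdev through the kernel nbd driver. A minimal sketch of exporting a single bdev by hand and probing it, assuming the nbd kernel module is loaded and the same RPC socket is in use; the dd probe mirrors the waitfornbd check that the test performs below:

  # Attach bdev Nvme0n1 to /dev/nbd0 via the running bdev_svc app.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$RPC" -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
  # Prove the block device is live with a single direct-I/O read.
  dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct
  # Detach when finished.
  "$RPC" -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0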
00:07:59.517 [2024-11-05 17:42:19.340031] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.517 [2024-11-05 17:42:19.476946] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:59.777 [2024-11-05 17:42:19.509377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.778 [2024-11-05 17:42:19.552657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.382 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:00.382 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:08:00.382 17:42:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:00.382 17:42:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.382 17:42:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:00.382 17:42:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:00.382 17:42:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:00.382 17:42:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.382 17:42:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:00.382 17:42:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:00.382 17:42:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:00.382 17:42:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:00.382 17:42:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:00.382 17:42:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:00.382 17:42:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:00.639 17:42:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:00.639 17:42:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:00.639 17:42:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:00.639 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:00.639 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:00.639 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:00.639 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:00.639 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:00.639 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:00.639 17:42:20 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:00.639 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:00.639 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:00.639 1+0 records in 00:08:00.639 1+0 records out 00:08:00.639 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000653293 s, 6.3 MB/s 00:08:00.639 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:00.639 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:00.639 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:00.639 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:00.639 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:00.639 17:42:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:00.639 17:42:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:00.639 17:42:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:08:00.897 17:42:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:00.897 17:42:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:00.897 17:42:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:00.897 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:08:00.897 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:00.897 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:00.897 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:00.897 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:08:00.897 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:00.897 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:00.897 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:00.897 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:00.897 1+0 records in 00:08:00.897 1+0 records out 00:08:00.897 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000539797 s, 7.6 MB/s 00:08:00.897 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:00.897 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:00.897 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:00.897 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:00.897 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:00.897 17:42:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:00.897 17:42:20 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:00.897 17:42:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 00:08:01.155 17:42:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:01.155 17:42:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:01.155 17:42:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:01.155 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:08:01.155 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:01.155 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:01.155 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:01.155 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:08:01.155 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:01.155 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:01.155 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:01.155 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:01.155 1+0 records in 00:08:01.155 1+0 records out 00:08:01.155 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509157 s, 8.0 MB/s 00:08:01.155 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.155 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:01.155 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.155 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:01.155 17:42:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:01.155 17:42:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:01.155 17:42:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:01.155 17:42:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:01.412 17:42:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:01.412 17:42:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:01.412 17:42:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:01.412 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:08:01.412 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:01.412 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:01.412 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:01.412 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:08:01.412 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:01.412 17:42:21 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:01.412 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:01.412 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:01.412 1+0 records in 00:08:01.412 1+0 records out 00:08:01.412 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337377 s, 12.1 MB/s 00:08:01.412 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.412 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:01.412 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.412 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:01.412 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:01.412 17:42:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:01.412 17:42:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:01.412 17:42:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:01.412 17:42:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:01.412 17:42:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:01.670 1+0 records in 00:08:01.670 1+0 records out 00:08:01.670 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388412 s, 10.5 MB/s 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:01.670 1+0 records in 00:08:01.670 1+0 records out 00:08:01.670 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000476091 s, 8.6 MB/s 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:01.670 17:42:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:01.928 17:42:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:01.928 17:42:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:01.928 17:42:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:01.928 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd6 00:08:01.928 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:01.928 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:01.928 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:01.928 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd6 /proc/partitions 00:08:01.928 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:01.928 17:42:21 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:01.928 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:01.928 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:01.928 1+0 records in 00:08:01.928 1+0 records out 00:08:01.928 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483788 s, 8.5 MB/s 00:08:01.928 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.928 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:01.928 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.928 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:01.928 17:42:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:01.928 17:42:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:01.928 17:42:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:01.928 17:42:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:02.186 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:02.186 { 00:08:02.186 "nbd_device": "/dev/nbd0", 00:08:02.186 "bdev_name": "Nvme0n1" 00:08:02.186 }, 00:08:02.186 { 00:08:02.186 "nbd_device": "/dev/nbd1", 00:08:02.186 "bdev_name": "Nvme1n1p1" 00:08:02.186 }, 00:08:02.186 { 00:08:02.186 "nbd_device": "/dev/nbd2", 00:08:02.186 "bdev_name": "Nvme1n1p2" 00:08:02.186 }, 00:08:02.186 { 00:08:02.186 "nbd_device": "/dev/nbd3", 00:08:02.186 "bdev_name": "Nvme2n1" 00:08:02.186 }, 00:08:02.186 { 00:08:02.186 "nbd_device": "/dev/nbd4", 00:08:02.186 "bdev_name": "Nvme2n2" 00:08:02.186 }, 00:08:02.186 { 00:08:02.186 "nbd_device": "/dev/nbd5", 00:08:02.186 "bdev_name": "Nvme2n3" 00:08:02.186 }, 00:08:02.186 { 00:08:02.186 "nbd_device": "/dev/nbd6", 00:08:02.186 "bdev_name": "Nvme3n1" 00:08:02.186 } 00:08:02.186 ]' 00:08:02.186 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:02.186 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:02.186 { 00:08:02.186 "nbd_device": "/dev/nbd0", 00:08:02.186 "bdev_name": "Nvme0n1" 00:08:02.186 }, 00:08:02.186 { 00:08:02.186 "nbd_device": "/dev/nbd1", 00:08:02.186 "bdev_name": "Nvme1n1p1" 00:08:02.186 }, 00:08:02.186 { 00:08:02.186 "nbd_device": "/dev/nbd2", 00:08:02.186 "bdev_name": "Nvme1n1p2" 00:08:02.186 }, 00:08:02.186 { 00:08:02.186 "nbd_device": "/dev/nbd3", 00:08:02.186 "bdev_name": "Nvme2n1" 00:08:02.186 }, 00:08:02.186 { 00:08:02.186 "nbd_device": "/dev/nbd4", 00:08:02.186 "bdev_name": "Nvme2n2" 00:08:02.186 }, 00:08:02.186 { 00:08:02.186 "nbd_device": "/dev/nbd5", 00:08:02.186 "bdev_name": "Nvme2n3" 00:08:02.186 }, 00:08:02.186 { 00:08:02.186 "nbd_device": "/dev/nbd6", 00:08:02.186 "bdev_name": "Nvme3n1" 00:08:02.186 } 00:08:02.186 ]' 00:08:02.186 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:02.186 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 
/dev/nbd6' 00:08:02.186 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.186 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:02.186 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:02.186 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:02.186 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.186 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:02.444 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:02.444 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:02.444 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:02.444 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.444 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.444 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:02.444 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.444 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.444 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.444 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:02.701 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:02.701 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:02.701 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:02.701 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.701 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.701 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:02.701 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.701 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.701 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.701 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:02.960 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:02.960 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:02.960 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:02.960 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.960 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.960 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:02.960 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.960 17:42:22 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.960 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.960 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:02.960 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:02.960 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:02.960 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:02.960 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.960 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.960 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:02.960 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.960 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.960 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.960 17:42:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:03.219 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:03.219 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:03.219 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:03.219 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.219 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.219 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:03.219 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:03.219 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.219 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.219 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:03.478 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:03.478 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:03.478 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:03.478 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.478 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.478 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:03.478 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:03.478 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.478 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.478 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:03.736 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd6 00:08:03.736 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:03.736 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:08:03.736 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.736 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.736 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:03.736 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:03.736 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.736 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:03.736 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.736 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:03.994 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:03.994 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:03.994 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:03.994 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:03.994 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:03.994 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:03.994 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:03.994 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:03.994 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:03.994 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:03.994 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:03.994 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:03.994 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:03.994 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.994 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:03.994 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:03.994 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:03.994 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:03.994 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:03.994 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.994 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # 
bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:03.994 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:03.994 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:03.994 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:03.994 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:03.994 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:03.994 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:03.994 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:04.252 /dev/nbd0 00:08:04.252 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:04.252 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:04.252 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:04.252 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:04.252 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:04.252 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:04.252 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:04.252 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:04.252 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:04.252 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:04.252 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.252 1+0 records in 00:08:04.252 1+0 records out 00:08:04.252 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414921 s, 9.9 MB/s 00:08:04.252 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.252 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:04.252 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.252 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:04.252 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:04.252 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:04.252 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:04.252 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:08:04.565 /dev/nbd1 00:08:04.565 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:04.565 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:04.565 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:08:04.565 17:42:24 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:04.565 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:04.565 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:04.565 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:08:04.565 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:04.565 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:04.565 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:04.565 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.565 1+0 records in 00:08:04.565 1+0 records out 00:08:04.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000730996 s, 5.6 MB/s 00:08:04.565 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.565 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:04.565 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.565 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:04.565 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:04.565 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:04.565 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:04.565 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:08:04.826 /dev/nbd10 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.826 1+0 records in 00:08:04.826 1+0 records out 00:08:04.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000493902 s, 8.3 MB/s 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:04.826 
17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:04.826 /dev/nbd11 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.826 1+0 records in 00:08:04.826 1+0 records out 00:08:04.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00057437 s, 7.1 MB/s 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:04.826 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:05.088 /dev/nbd12 00:08:05.088 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:05.088 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:05.088 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:08:05.088 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:05.088 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:05.088 17:42:24 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:05.088 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:08:05.088 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:05.088 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:05.088 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:05.088 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.088 1+0 records in 00:08:05.088 1+0 records out 00:08:05.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00119336 s, 3.4 MB/s 00:08:05.088 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.088 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:05.088 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.088 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:05.088 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:05.088 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:05.088 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:05.088 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:05.349 /dev/nbd13 00:08:05.349 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:05.349 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:05.349 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:08:05.349 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:05.349 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:05.349 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:05.349 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:08:05.349 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:05.349 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:05.349 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:05.349 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.349 1+0 records in 00:08:05.349 1+0 records out 00:08:05.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00065967 s, 6.2 MB/s 00:08:05.349 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.349 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:05.349 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.349 17:42:25 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:05.349 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:05.349 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:05.349 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:05.349 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:05.610 /dev/nbd14 00:08:05.610 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:05.610 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:05.610 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd14 00:08:05.610 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:05.610 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:05.610 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:05.610 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd14 /proc/partitions 00:08:05.610 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:05.610 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:05.610 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:05.610 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.610 1+0 records in 00:08:05.610 1+0 records out 00:08:05.610 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00138751 s, 3.0 MB/s 00:08:05.610 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.610 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:05.610 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.610 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:05.610 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:05.610 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:05.610 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:05.610 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:05.610 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:05.610 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:05.870 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:05.870 { 00:08:05.870 "nbd_device": "/dev/nbd0", 00:08:05.870 "bdev_name": "Nvme0n1" 00:08:05.870 }, 00:08:05.870 { 00:08:05.870 "nbd_device": "/dev/nbd1", 00:08:05.870 "bdev_name": "Nvme1n1p1" 00:08:05.870 }, 00:08:05.870 { 00:08:05.870 "nbd_device": "/dev/nbd10", 00:08:05.870 "bdev_name": "Nvme1n1p2" 00:08:05.870 }, 00:08:05.870 { 00:08:05.870 "nbd_device": "/dev/nbd11", 00:08:05.870 "bdev_name": "Nvme2n1" 00:08:05.870 
}, 00:08:05.870 { 00:08:05.870 "nbd_device": "/dev/nbd12", 00:08:05.870 "bdev_name": "Nvme2n2" 00:08:05.870 }, 00:08:05.870 { 00:08:05.870 "nbd_device": "/dev/nbd13", 00:08:05.870 "bdev_name": "Nvme2n3" 00:08:05.870 }, 00:08:05.870 { 00:08:05.870 "nbd_device": "/dev/nbd14", 00:08:05.870 "bdev_name": "Nvme3n1" 00:08:05.870 } 00:08:05.870 ]' 00:08:05.870 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:05.870 { 00:08:05.870 "nbd_device": "/dev/nbd0", 00:08:05.870 "bdev_name": "Nvme0n1" 00:08:05.870 }, 00:08:05.870 { 00:08:05.870 "nbd_device": "/dev/nbd1", 00:08:05.870 "bdev_name": "Nvme1n1p1" 00:08:05.870 }, 00:08:05.870 { 00:08:05.870 "nbd_device": "/dev/nbd10", 00:08:05.870 "bdev_name": "Nvme1n1p2" 00:08:05.870 }, 00:08:05.870 { 00:08:05.870 "nbd_device": "/dev/nbd11", 00:08:05.870 "bdev_name": "Nvme2n1" 00:08:05.870 }, 00:08:05.870 { 00:08:05.870 "nbd_device": "/dev/nbd12", 00:08:05.870 "bdev_name": "Nvme2n2" 00:08:05.870 }, 00:08:05.870 { 00:08:05.870 "nbd_device": "/dev/nbd13", 00:08:05.870 "bdev_name": "Nvme2n3" 00:08:05.870 }, 00:08:05.870 { 00:08:05.870 "nbd_device": "/dev/nbd14", 00:08:05.870 "bdev_name": "Nvme3n1" 00:08:05.870 } 00:08:05.870 ]' 00:08:05.870 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:05.870 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:05.870 /dev/nbd1 00:08:05.870 /dev/nbd10 00:08:05.870 /dev/nbd11 00:08:05.870 /dev/nbd12 00:08:05.870 /dev/nbd13 00:08:05.870 /dev/nbd14' 00:08:05.870 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:05.870 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:05.870 /dev/nbd1 00:08:05.870 /dev/nbd10 00:08:05.870 /dev/nbd11 00:08:05.870 /dev/nbd12 00:08:05.870 /dev/nbd13 00:08:05.870 /dev/nbd14' 00:08:05.870 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:08:05.870 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:08:05.870 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:08:05.870 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:05.870 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:05.870 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:05.870 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:05.870 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:05.870 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:05.870 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:05.870 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:05.870 256+0 records in 00:08:05.870 256+0 records out 00:08:05.870 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00802368 s, 131 MB/s 00:08:05.870 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:05.870 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd 
if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:05.870 256+0 records in 00:08:05.870 256+0 records out 00:08:05.870 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.085828 s, 12.2 MB/s 00:08:05.870 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:05.870 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:06.128 256+0 records in 00:08:06.128 256+0 records out 00:08:06.128 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.109669 s, 9.6 MB/s 00:08:06.128 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:06.128 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:06.128 256+0 records in 00:08:06.128 256+0 records out 00:08:06.128 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0811868 s, 12.9 MB/s 00:08:06.128 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:06.129 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:06.387 256+0 records in 00:08:06.387 256+0 records out 00:08:06.387 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136765 s, 7.7 MB/s 00:08:06.387 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:06.387 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:06.387 256+0 records in 00:08:06.387 256+0 records out 00:08:06.387 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.117841 s, 8.9 MB/s 00:08:06.387 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:06.387 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:06.645 256+0 records in 00:08:06.645 256+0 records out 00:08:06.645 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0806048 s, 13.0 MB/s 00:08:06.646 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:06.646 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:06.646 256+0 records in 00:08:06.646 256+0 records out 00:08:06.646 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.116533 s, 9.0 MB/s 00:08:06.646 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:06.646 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:06.646 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:06.646 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:06.646 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:06.646 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' 
verify = write ']' 00:08:06.646 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:06.646 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:06.646 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:06.646 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:06.646 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:06.646 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:06.646 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:06.646 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:06.646 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:06.646 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:06.646 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:06.646 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:06.646 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:06.646 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:06.646 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:06.646 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:06.646 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:06.646 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.646 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:06.646 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:06.646 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:06.646 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.646 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:06.904 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:06.904 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:06.904 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:06.904 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.904 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.904 17:42:26 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:06.904 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.904 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.904 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.904 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:07.161 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:07.161 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:07.161 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:07.161 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.161 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.161 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:07.161 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.161 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.161 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.161 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:07.419 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:07.419 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:07.419 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:07.419 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.419 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.419 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:07.419 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.419 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.419 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.419 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:07.419 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:07.419 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:07.419 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:07.419 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.419 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.419 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:07.419 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.419 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.419 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.677 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:07.677 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:07.677 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:07.677 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:07.677 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.677 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.677 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:07.677 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.677 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.677 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.677 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:07.935 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:07.935 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:07.935 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:07.935 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.935 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.935 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:07.935 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.935 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.935 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.935 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:08.195 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:08.195 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:08.195 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:08.195 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:08.195 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:08.195 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:08.195 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:08.195 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:08.195 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:08.196 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.196 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:08.455 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:08.455 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r 
'.[] | .nbd_device' 00:08:08.455 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:08.455 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:08.455 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:08.455 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:08.455 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:08.455 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:08.455 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:08.455 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:08.455 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:08.455 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:08.455 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:08.455 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.455 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:08.455 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:08.714 malloc_lvol_verify 00:08:08.714 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:08.715 48ccf3a5-88f0-48e4-9334-ec7e176021ed 00:08:09.008 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:09.008 128be363-5537-4277-b140-4d99fd09f8b4 00:08:09.008 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:09.266 /dev/nbd0 00:08:09.266 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:09.267 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:09.267 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:09.267 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:09.267 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:09.267 mke2fs 1.47.0 (5-Feb-2023) 00:08:09.267 Discarding device blocks: 0/4096 done 00:08:09.267 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:09.267 00:08:09.267 Allocating group tables: 0/1 done 00:08:09.267 Writing inode tables: 0/1 done 00:08:09.267 Creating journal (1024 blocks): done 00:08:09.267 Writing superblocks and filesystem accounting information: 0/1 done 00:08:09.267 00:08:09.267 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:09.267 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:09.267 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:09.267 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 
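The lvol round-trip traced above reduces to a short RPC sequence. A minimal sketch of it, using only the socket path, RPC names, and sizes that appear in the trace (the readiness check is simplified from the harness's wait_for_nbd_set_capacity helper):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB malloc bdev, 512 B blocks
$rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # prints the new lvstore's UUID
$rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume in that store
$rpc nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol as /dev/nbd0
# Wait until the kernel reports a non-zero capacity before touching the device.
[[ -e /sys/block/nbd0/size && $(< /sys/block/nbd0/size) -gt 0 ]]
mkfs.ext4 /dev/nbd0                                    # prove the export is writable
$rpc nbd_stop_disk /dev/nbd0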
00:08:09.267 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:09.267 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.267 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:09.526 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:09.526 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:09.526 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:09.526 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:09.526 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:09.526 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:09.526 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:09.526 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:09.526 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74512 00:08:09.526 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 74512 ']' 00:08:09.526 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 74512 00:08:09.526 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:08:09.526 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:09.526 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74512 00:08:09.526 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:09.526 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:09.526 killing process with pid 74512 00:08:09.526 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74512' 00:08:09.526 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@971 -- # kill 74512 00:08:09.526 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@976 -- # wait 74512 00:08:09.788 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:09.788 00:08:09.788 real 0m10.294s 00:08:09.788 user 0m14.784s 00:08:09.788 sys 0m3.657s 00:08:09.788 ************************************ 00:08:09.788 END TEST bdev_nbd 00:08:09.788 ************************************ 00:08:09.788 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:09.788 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:09.788 17:42:29 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:08:09.788 17:42:29 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:08:09.788 skipping fio tests on NVMe due to multi-ns failures. 00:08:09.788 17:42:29 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:08:09.788 17:42:29 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
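The shutdown path above follows the harness's killprocess helper: confirm the pid is alive, refuse to signal it if it resolves to a sudo wrapper, then kill and reap it. A simplified sketch of that pattern (the real helper in common/autotest_common.sh carries additional platform branches):

killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0         # nothing to do if already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [[ $name == sudo ]] && return 1                # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                        # reap it so the RPC socket is released
}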
00:08:09.788 17:42:29 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:09.788 17:42:29 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:09.788 17:42:29 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:08:09.788 17:42:29 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:09.788 17:42:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:09.788 ************************************ 00:08:09.788 START TEST bdev_verify 00:08:09.788 ************************************ 00:08:09.788 17:42:29 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:09.788 [2024-11-05 17:42:29.685445] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:08:09.788 [2024-11-05 17:42:29.685571] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74917 ] 00:08:10.047 [2024-11-05 17:42:29.819279] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:10.047 [2024-11-05 17:42:29.849524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:10.047 [2024-11-05 17:42:29.872225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.047 [2024-11-05 17:42:29.872320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.612 Running I/O for 5 seconds... 
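For readers skimming the flags: bdevperf loads the bdev configuration from bdev.json and runs a data-integrity workload against every bdev it finds. Annotated form of the exact invocation above; the reading of -C is inferred from the per-core jobs in the results table that follows:

#   -q 128     keep 128 I/Os outstanding per job
#   -o 4096    4 KiB I/O size
#   -w verify  write a known pattern, read it back, and compare
#   -t 5       run for 5 seconds
#   -C         let every core drive every bdev (hence a 0x1 and a 0x2 job per bdev)
#   -m 0x3     core mask: cores 0 and 1
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3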
00:08:12.930 20096.00 IOPS, 78.50 MiB/s [2024-11-05T17:42:33.899Z] 20864.00 IOPS, 81.50 MiB/s [2024-11-05T17:42:34.841Z] 20586.67 IOPS, 80.42 MiB/s [2024-11-05T17:42:35.412Z] 20320.00 IOPS, 79.38 MiB/s [2024-11-05T17:42:35.673Z] 20364.80 IOPS, 79.55 MiB/s 00:08:15.682 Latency(us) 00:08:15.682 [2024-11-05T17:42:35.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:15.682 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:15.682 Verification LBA range: start 0x0 length 0xbd0bd 00:08:15.682 Nvme0n1 : 5.06 1417.78 5.54 0.00 0.00 90079.11 16232.76 77836.60 00:08:15.682 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:15.682 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:15.682 Nvme0n1 : 5.05 1444.83 5.64 0.00 0.00 88323.83 18148.43 80256.39 00:08:15.682 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:15.682 Verification LBA range: start 0x0 length 0x4ff80 00:08:15.682 Nvme1n1p1 : 5.06 1417.36 5.54 0.00 0.00 89979.27 15526.99 75416.81 00:08:15.682 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:15.682 Verification LBA range: start 0x4ff80 length 0x4ff80 00:08:15.682 Nvme1n1p1 : 5.05 1444.35 5.64 0.00 0.00 88090.75 21677.29 74610.22 00:08:15.682 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:15.682 Verification LBA range: start 0x0 length 0x4ff7f 00:08:15.682 Nvme1n1p2 : 5.06 1416.93 5.53 0.00 0.00 89854.07 14922.04 73400.32 00:08:15.682 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:15.682 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:08:15.682 Nvme1n1p2 : 5.07 1451.70 5.67 0.00 0.00 87434.12 6503.19 72997.02 00:08:15.682 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:15.682 Verification LBA range: start 0x0 length 0x80000 00:08:15.682 Nvme2n1 : 5.06 1416.53 5.53 0.00 0.00 89745.10 15325.34 68964.04 00:08:15.682 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:15.682 Verification LBA range: start 0x80000 length 0x80000 00:08:15.682 Nvme2n1 : 5.07 1450.43 5.67 0.00 0.00 87301.68 10032.05 72593.72 00:08:15.682 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:15.682 Verification LBA range: start 0x0 length 0x80000 00:08:15.682 Nvme2n2 : 5.06 1416.15 5.53 0.00 0.00 89614.93 15325.34 68964.04 00:08:15.682 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:15.682 Verification LBA range: start 0x80000 length 0x80000 00:08:15.682 Nvme2n2 : 5.09 1459.25 5.70 0.00 0.00 86749.02 10889.06 71787.13 00:08:15.682 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:15.682 Verification LBA range: start 0x0 length 0x80000 00:08:15.682 Nvme2n3 : 5.06 1415.75 5.53 0.00 0.00 89475.40 14821.22 74610.22 00:08:15.682 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:15.682 Verification LBA range: start 0x80000 length 0x80000 00:08:15.682 Nvme2n3 : 5.09 1458.86 5.70 0.00 0.00 86638.57 11443.59 73400.32 00:08:15.682 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:15.682 Verification LBA range: start 0x0 length 0x20000 00:08:15.682 Nvme3n1 : 5.07 1426.44 5.57 0.00 0.00 88700.81 2054.30 77433.30 00:08:15.682 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:15.682 Verification LBA range: start 0x20000 length 0x20000 00:08:15.682 Nvme3n1 
: 5.09 1458.47 5.70 0.00 0.00 86559.98 11443.59 77836.60 00:08:15.682 [2024-11-05T17:42:35.673Z] =================================================================================================================== 00:08:15.682 [2024-11-05T17:42:35.673Z] Total : 20094.82 78.50 0.00 0.00 88449.25 2054.30 80256.39 00:08:16.252 00:08:16.252 real 0m6.520s 00:08:16.252 user 0m12.261s 00:08:16.252 sys 0m0.251s 00:08:16.252 17:42:36 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:16.252 ************************************ 00:08:16.252 END TEST bdev_verify 00:08:16.252 ************************************ 00:08:16.252 17:42:36 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:16.252 17:42:36 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:16.252 17:42:36 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:08:16.252 17:42:36 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:16.252 17:42:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:16.253 ************************************ 00:08:16.253 START TEST bdev_verify_big_io 00:08:16.253 ************************************ 00:08:16.253 17:42:36 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:16.512 [2024-11-05 17:42:36.265982] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:08:16.512 [2024-11-05 17:42:36.266144] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75009 ] 00:08:16.512 [2024-11-05 17:42:36.400227] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:16.512 [2024-11-05 17:42:36.428423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:16.512 [2024-11-05 17:42:36.459999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.512 [2024-11-05 17:42:36.460102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.082 Running I/O for 5 seconds... 
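The MiB/s column in these tables is simply IOPS scaled by the I/O size, which makes the totals easy to sanity-check; for instance, against the verify totals above and the 64 KiB big-I/O totals below:

printf '%.2f MiB/s\n' "$(echo '20094.82 * 4096 / 1048576' | bc -l)"   # verify total -> 78.50
printf '%.2f MiB/s\n' "$(echo '1370.26 * 65536 / 1048576' | bc -l)"   # big-io total -> 85.64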
00:08:23.183 1197.00 IOPS, 74.81 MiB/s [2024-11-05T17:42:43.174Z] 3338.00 IOPS, 208.62 MiB/s 00:08:23.183 Latency(us) 00:08:23.183 [2024-11-05T17:42:43.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.183 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:23.183 Verification LBA range: start 0x0 length 0xbd0b 00:08:23.183 Nvme0n1 : 6.06 73.93 4.62 0.00 0.00 1677666.91 28230.89 1703532.70 00:08:23.183 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:23.183 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:23.183 Nvme0n1 : 6.00 106.69 6.67 0.00 0.00 1151577.09 23088.84 1238932.87 00:08:23.183 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:23.183 Verification LBA range: start 0x0 length 0x4ff8 00:08:23.183 Nvme1n1p1 : 6.06 80.65 5.04 0.00 0.00 1493600.39 45774.38 1497043.89 00:08:23.183 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:23.183 Verification LBA range: start 0x4ff8 length 0x4ff8 00:08:23.183 Nvme1n1p1 : 6.00 107.00 6.69 0.00 0.00 1111855.54 108890.58 1109877.37 00:08:23.184 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:23.184 Verification LBA range: start 0x0 length 0x4ff7 00:08:23.184 Nvme1n1p2 : 6.06 80.80 5.05 0.00 0.00 1446394.54 46379.32 1529307.77 00:08:23.184 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:23.184 Verification LBA range: start 0x4ff7 length 0x4ff7 00:08:23.184 Nvme1n1p2 : 6.00 107.32 6.71 0.00 0.00 1075424.15 109697.18 1116330.14 00:08:23.184 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:23.184 Verification LBA range: start 0x0 length 0x8000 00:08:23.184 Nvme2n1 : 6.07 80.78 5.05 0.00 0.00 1401232.11 45371.08 1555118.87 00:08:23.184 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:23.184 Verification LBA range: start 0x8000 length 0x8000 00:08:23.184 Nvme2n1 : 6.00 110.09 6.88 0.00 0.00 1027442.49 64124.46 993727.41 00:08:23.184 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:23.184 Verification LBA range: start 0x0 length 0x8000 00:08:23.184 Nvme2n2 : 6.07 84.38 5.27 0.00 0.00 1309301.37 58881.58 1587382.74 00:08:23.184 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:23.184 Verification LBA range: start 0x8000 length 0x8000 00:08:23.184 Nvme2n2 : 6.06 107.09 6.69 0.00 0.00 1024168.36 54041.99 1871304.86 00:08:23.184 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:23.184 Verification LBA range: start 0x0 length 0x8000 00:08:23.184 Nvme2n3 : 6.07 84.34 5.27 0.00 0.00 1265496.22 60494.77 1626099.40 00:08:23.184 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:23.184 Verification LBA range: start 0x8000 length 0x8000 00:08:23.184 Nvme2n3 : 6.08 113.19 7.07 0.00 0.00 941354.37 13409.67 1716438.25 00:08:23.184 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:23.184 Verification LBA range: start 0x0 length 0x2000 00:08:23.184 Nvme3n1 : 6.11 100.61 6.29 0.00 0.00 1033771.36 4209.43 1677721.60 00:08:23.184 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:23.184 Verification LBA range: start 0x2000 length 0x2000 00:08:23.184 Nvme3n1 : 6.12 133.39 8.34 0.00 0.00 775100.95 630.15 1974549.27 00:08:23.184 [2024-11-05T17:42:43.175Z] 
=================================================================================================================== 00:08:23.184 [2024-11-05T17:42:43.175Z] Total : 1370.26 85.64 0.00 0.00 1156983.97 630.15 1974549.27 00:08:24.116 00:08:24.116 real 0m7.621s 00:08:24.116 user 0m14.405s 00:08:24.116 sys 0m0.279s 00:08:24.116 17:42:43 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:24.116 ************************************ 00:08:24.116 END TEST bdev_verify_big_io 00:08:24.116 ************************************ 00:08:24.116 17:42:43 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:24.116 17:42:43 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:24.116 17:42:43 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:08:24.116 17:42:43 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:24.116 17:42:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:24.116 ************************************ 00:08:24.116 START TEST bdev_write_zeroes 00:08:24.116 ************************************ 00:08:24.116 17:42:43 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:24.116 [2024-11-05 17:42:43.934944] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:08:24.116 [2024-11-05 17:42:43.935042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75113 ] 00:08:24.116 [2024-11-05 17:42:44.059900] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:24.116 [2024-11-05 17:42:44.080765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.116 [2024-11-05 17:42:44.103780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.683 Running I/O for 1 seconds... 
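This write_zeroes pass pushes 4 KiB write-zeroes commands at each bdev for one second; the per-bdev results follow. Whether a bdev advertises that capability can be read back over RPC; a sketch, assuming a running target on the default RPC socket (the same supported_io_types map shows up verbatim in the bdev_gpt_uuid test further down):

# Check a bdev's write-zeroes support via the RPC interface:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme1n1p1 \
  | jq '.[0].supported_io_types.write_zeroes'
# prints: true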
00:08:25.618 62720.00 IOPS, 245.00 MiB/s 00:08:25.618 Latency(us) 00:08:25.618 [2024-11-05T17:42:45.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.618 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:25.618 Nvme0n1 : 1.02 8946.95 34.95 0.00 0.00 14271.66 6654.42 25710.28 00:08:25.618 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:25.618 Nvme1n1p1 : 1.02 8935.94 34.91 0.00 0.00 14269.40 9427.10 25710.28 00:08:25.618 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:25.618 Nvme1n1p2 : 1.03 8924.89 34.86 0.00 0.00 14245.47 9175.04 24500.38 00:08:25.618 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:25.618 Nvme2n1 : 1.03 8914.84 34.82 0.00 0.00 14238.81 8922.98 24802.86 00:08:25.618 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:25.618 Nvme2n2 : 1.03 8904.79 34.78 0.00 0.00 14235.21 9074.22 24197.91 00:08:25.618 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:25.618 Nvme2n3 : 1.03 8894.79 34.75 0.00 0.00 14230.37 8670.92 24197.91 00:08:25.618 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:25.618 Nvme3n1 : 1.03 8884.80 34.71 0.00 0.00 14222.91 8166.79 25811.10 00:08:25.618 [2024-11-05T17:42:45.609Z] =================================================================================================================== 00:08:25.618 [2024-11-05T17:42:45.609Z] Total : 62407.00 243.78 0.00 0.00 14244.83 6654.42 25811.10 00:08:25.879 00:08:25.879 real 0m1.882s 00:08:25.879 user 0m1.586s 00:08:25.879 sys 0m0.184s 00:08:25.879 17:42:45 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:25.879 ************************************ 00:08:25.879 END TEST bdev_write_zeroes 00:08:25.879 ************************************ 00:08:25.879 17:42:45 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:25.879 17:42:45 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:25.879 17:42:45 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:08:25.879 17:42:45 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:25.879 17:42:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:25.879 ************************************ 00:08:25.879 START TEST bdev_json_nonenclosed 00:08:25.879 ************************************ 00:08:25.879 17:42:45 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:26.140 [2024-11-05 17:42:45.891867] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 
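The nonenclosed.json handed to this run deliberately drops the enclosing braces, so startup has to fail cleanly. For contrast, a sketch of the shape --json accepts: a top-level object wrapping a "subsystems" array (the malloc bdev here is illustrative, not what this log's bdev.json contains):

cat > /tmp/good.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 16384, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF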
00:08:26.140 [2024-11-05 17:42:45.892001] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75155 ] 00:08:26.140 [2024-11-05 17:42:46.026432] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:26.140 [2024-11-05 17:42:46.058111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.140 [2024-11-05 17:42:46.098696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.140 [2024-11-05 17:42:46.098850] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:26.140 [2024-11-05 17:42:46.098874] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:26.140 [2024-11-05 17:42:46.098891] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:26.400 00:08:26.400 real 0m0.373s 00:08:26.400 user 0m0.158s 00:08:26.400 sys 0m0.111s 00:08:26.400 17:42:46 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:26.400 17:42:46 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:26.400 ************************************ 00:08:26.400 END TEST bdev_json_nonenclosed 00:08:26.400 ************************************ 00:08:26.400 17:42:46 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:26.400 17:42:46 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:08:26.400 17:42:46 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:26.400 17:42:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:26.400 ************************************ 00:08:26.400 START TEST bdev_json_nonarray 00:08:26.400 ************************************ 00:08:26.400 17:42:46 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:26.400 [2024-11-05 17:42:46.321361] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:08:26.400 [2024-11-05 17:42:46.321486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75175 ] 00:08:26.658 [2024-11-05 17:42:46.450756] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:26.658 [2024-11-05 17:42:46.481769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.658 [2024-11-05 17:42:46.505131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.658 [2024-11-05 17:42:46.505221] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
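The two negative inputs fail at adjacent checks in the same function, and both are expected to fall through to the rpc/app_stop lines that follow; roughly:

#   nonenclosed.json  ->  "subsystems": [...]  with no outer {}   (json_config.c:608)
#   nonarray.json     ->  "subsystems" present but not an array   (json_config.c:614)
# Either way spdk_app_stop exits non-zero, which is exactly what these two tests assert.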
00:08:26.658 [2024-11-05 17:42:46.505238] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:26.658 [2024-11-05 17:42:46.505248] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:26.658 00:08:26.658 real 0m0.321s 00:08:26.658 user 0m0.128s 00:08:26.658 sys 0m0.090s 00:08:26.658 17:42:46 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:26.658 17:42:46 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:26.658 ************************************ 00:08:26.658 END TEST bdev_json_nonarray 00:08:26.658 ************************************ 00:08:26.658 17:42:46 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:08:26.658 17:42:46 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:08:26.658 17:42:46 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:26.658 17:42:46 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:26.658 17:42:46 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:26.658 17:42:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:26.658 ************************************ 00:08:26.658 START TEST bdev_gpt_uuid 00:08:26.658 ************************************ 00:08:26.658 17:42:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1127 -- # bdev_gpt_uuid 00:08:26.658 17:42:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:08:26.658 17:42:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:08:26.658 17:42:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=75195 00:08:26.658 17:42:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:26.658 17:42:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 75195 00:08:26.658 17:42:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # '[' -z 75195 ']' 00:08:26.658 17:42:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.658 17:42:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:26.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.658 17:42:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.658 17:42:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:26.658 17:42:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:26.658 17:42:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:26.917 [2024-11-05 17:42:46.692792] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:08:26.917 [2024-11-05 17:42:46.692899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75195 ] 00:08:26.917 [2024-11-05 17:42:46.821583] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:08:26.917 [2024-11-05 17:42:46.844229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.917 [2024-11-05 17:42:46.868506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.852 17:42:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:27.852 17:42:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@866 -- # return 0 00:08:27.852 17:42:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:27.852 17:42:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.852 17:42:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:28.111 Some configs were skipped because the RPC state that can call them passed over. 00:08:28.111 17:42:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.111 17:42:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:08:28.111 17:42:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.111 17:42:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:28.111 17:42:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.111 17:42:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:28.111 17:42:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.111 17:42:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:28.111 17:42:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.111 17:42:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:08:28.111 { 00:08:28.111 "name": "Nvme1n1p1", 00:08:28.111 "aliases": [ 00:08:28.111 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:28.111 ], 00:08:28.111 "product_name": "GPT Disk", 00:08:28.111 "block_size": 4096, 00:08:28.111 "num_blocks": 655104, 00:08:28.111 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:28.111 "assigned_rate_limits": { 00:08:28.111 "rw_ios_per_sec": 0, 00:08:28.111 "rw_mbytes_per_sec": 0, 00:08:28.111 "r_mbytes_per_sec": 0, 00:08:28.111 "w_mbytes_per_sec": 0 00:08:28.111 }, 00:08:28.111 "claimed": false, 00:08:28.111 "zoned": false, 00:08:28.111 "supported_io_types": { 00:08:28.111 "read": true, 00:08:28.111 "write": true, 00:08:28.111 "unmap": true, 00:08:28.111 "flush": true, 00:08:28.111 "reset": true, 00:08:28.111 "nvme_admin": false, 00:08:28.111 "nvme_io": false, 00:08:28.111 "nvme_io_md": false, 00:08:28.111 "write_zeroes": true, 00:08:28.111 "zcopy": false, 00:08:28.111 "get_zone_info": false, 00:08:28.111 "zone_management": false, 00:08:28.111 "zone_append": false, 00:08:28.111 "compare": true, 00:08:28.111 "compare_and_write": false, 00:08:28.111 "abort": true, 00:08:28.111 "seek_hole": false, 00:08:28.111 "seek_data": false, 00:08:28.111 "copy": true, 00:08:28.111 "nvme_iov_md": false 00:08:28.111 }, 00:08:28.111 "driver_specific": { 00:08:28.111 "gpt": { 00:08:28.111 "base_bdev": "Nvme1n1", 00:08:28.111 "offset_blocks": 256, 00:08:28.111 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:28.111 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 
00:08:28.111 "partition_name": "SPDK_TEST_first" 00:08:28.111 } 00:08:28.111 } 00:08:28.111 } 00:08:28.111 ]' 00:08:28.111 17:42:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:08:28.111 17:42:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:08:28.111 17:42:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:08:28.111 17:42:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:28.111 17:42:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:28.111 17:42:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:28.111 17:42:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:28.111 17:42:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.111 17:42:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:28.111 17:42:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.111 17:42:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:08:28.111 { 00:08:28.111 "name": "Nvme1n1p2", 00:08:28.111 "aliases": [ 00:08:28.111 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:28.111 ], 00:08:28.111 "product_name": "GPT Disk", 00:08:28.111 "block_size": 4096, 00:08:28.111 "num_blocks": 655103, 00:08:28.111 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:28.111 "assigned_rate_limits": { 00:08:28.111 "rw_ios_per_sec": 0, 00:08:28.111 "rw_mbytes_per_sec": 0, 00:08:28.111 "r_mbytes_per_sec": 0, 00:08:28.111 "w_mbytes_per_sec": 0 00:08:28.111 }, 00:08:28.111 "claimed": false, 00:08:28.111 "zoned": false, 00:08:28.111 "supported_io_types": { 00:08:28.111 "read": true, 00:08:28.111 "write": true, 00:08:28.111 "unmap": true, 00:08:28.111 "flush": true, 00:08:28.111 "reset": true, 00:08:28.111 "nvme_admin": false, 00:08:28.111 "nvme_io": false, 00:08:28.111 "nvme_io_md": false, 00:08:28.111 "write_zeroes": true, 00:08:28.111 "zcopy": false, 00:08:28.111 "get_zone_info": false, 00:08:28.111 "zone_management": false, 00:08:28.111 "zone_append": false, 00:08:28.111 "compare": true, 00:08:28.111 "compare_and_write": false, 00:08:28.111 "abort": true, 00:08:28.111 "seek_hole": false, 00:08:28.111 "seek_data": false, 00:08:28.111 "copy": true, 00:08:28.111 "nvme_iov_md": false 00:08:28.111 }, 00:08:28.111 "driver_specific": { 00:08:28.111 "gpt": { 00:08:28.111 "base_bdev": "Nvme1n1", 00:08:28.111 "offset_blocks": 655360, 00:08:28.111 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:28.111 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:28.111 "partition_name": "SPDK_TEST_second" 00:08:28.111 } 00:08:28.111 } 00:08:28.111 } 00:08:28.111 ]' 00:08:28.111 17:42:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:08:28.111 17:42:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:08:28.111 17:42:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:08:28.111 17:42:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ 
abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:28.111 17:42:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:28.112 17:42:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:28.112 17:42:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 75195 00:08:28.112 17:42:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # '[' -z 75195 ']' 00:08:28.112 17:42:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # kill -0 75195 00:08:28.112 17:42:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # uname 00:08:28.112 17:42:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:28.112 17:42:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75195 00:08:28.112 killing process with pid 75195 00:08:28.112 17:42:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:28.112 17:42:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:28.112 17:42:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75195' 00:08:28.112 17:42:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@971 -- # kill 75195 00:08:28.112 17:42:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@976 -- # wait 75195 00:08:28.677 00:08:28.677 real 0m1.796s 00:08:28.677 user 0m1.925s 00:08:28.677 sys 0m0.369s 00:08:28.677 17:42:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:28.677 17:42:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:28.677 ************************************ 00:08:28.677 END TEST bdev_gpt_uuid 00:08:28.677 ************************************ 00:08:28.677 17:42:48 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:08:28.677 17:42:48 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:28.677 17:42:48 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:08:28.677 17:42:48 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:28.677 17:42:48 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:28.677 17:42:48 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:28.677 17:42:48 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:28.677 17:42:48 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:28.677 17:42:48 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:28.935 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:29.193 Waiting for block devices as requested 00:08:29.193 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:29.193 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:29.193 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:29.193 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:34.458 * Events for some block/disk devices (0000:00:13.0) were not caught, 
they may be missing 00:08:34.458 17:42:54 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:34.458 17:42:54 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:34.717 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:34.717 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:34.717 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:34.717 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:34.717 17:42:54 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:34.717 00:08:34.717 real 0m48.730s 00:08:34.717 user 1m1.316s 00:08:34.717 sys 0m8.210s 00:08:34.717 17:42:54 blockdev_nvme_gpt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:34.717 ************************************ 00:08:34.717 END TEST blockdev_nvme_gpt 00:08:34.717 ************************************ 00:08:34.717 17:42:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:34.717 17:42:54 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:34.717 17:42:54 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:34.717 17:42:54 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:34.717 17:42:54 -- common/autotest_common.sh@10 -- # set +x 00:08:34.717 ************************************ 00:08:34.717 START TEST nvme 00:08:34.717 ************************************ 00:08:34.717 17:42:54 nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:34.717 * Looking for test storage... 00:08:34.717 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:34.717 17:42:54 nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:34.717 17:42:54 nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:08:34.717 17:42:54 nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:34.717 17:42:54 nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:34.717 17:42:54 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:34.717 17:42:54 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:34.717 17:42:54 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:34.717 17:42:54 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.717 17:42:54 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:34.717 17:42:54 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:34.717 17:42:54 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:34.717 17:42:54 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:34.717 17:42:54 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:34.717 17:42:54 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:34.717 17:42:54 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:34.717 17:42:54 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:34.717 17:42:54 nvme -- scripts/common.sh@345 -- # : 1 00:08:34.717 17:42:54 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:34.717 17:42:54 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:34.717 17:42:54 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:34.717 17:42:54 nvme -- scripts/common.sh@353 -- # local d=1 00:08:34.717 17:42:54 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.717 17:42:54 nvme -- scripts/common.sh@355 -- # echo 1 00:08:34.717 17:42:54 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:34.717 17:42:54 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:34.717 17:42:54 nvme -- scripts/common.sh@353 -- # local d=2 00:08:34.717 17:42:54 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.717 17:42:54 nvme -- scripts/common.sh@355 -- # echo 2 00:08:34.717 17:42:54 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:34.717 17:42:54 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:34.717 17:42:54 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:34.717 17:42:54 nvme -- scripts/common.sh@368 -- # return 0 00:08:34.717 17:42:54 nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.717 17:42:54 nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:34.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.717 --rc genhtml_branch_coverage=1 00:08:34.717 --rc genhtml_function_coverage=1 00:08:34.717 --rc genhtml_legend=1 00:08:34.717 --rc geninfo_all_blocks=1 00:08:34.717 --rc geninfo_unexecuted_blocks=1 00:08:34.717 00:08:34.717 ' 00:08:34.717 17:42:54 nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:34.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.717 --rc genhtml_branch_coverage=1 00:08:34.717 --rc genhtml_function_coverage=1 00:08:34.717 --rc genhtml_legend=1 00:08:34.717 --rc geninfo_all_blocks=1 00:08:34.717 --rc geninfo_unexecuted_blocks=1 00:08:34.717 00:08:34.717 ' 00:08:34.717 17:42:54 nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:34.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.717 --rc genhtml_branch_coverage=1 00:08:34.717 --rc genhtml_function_coverage=1 00:08:34.717 --rc genhtml_legend=1 00:08:34.717 --rc geninfo_all_blocks=1 00:08:34.717 --rc geninfo_unexecuted_blocks=1 00:08:34.717 00:08:34.717 ' 00:08:34.717 17:42:54 nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:34.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.717 --rc genhtml_branch_coverage=1 00:08:34.717 --rc genhtml_function_coverage=1 00:08:34.717 --rc genhtml_legend=1 00:08:34.717 --rc geninfo_all_blocks=1 00:08:34.717 --rc geninfo_unexecuted_blocks=1 00:08:34.717 00:08:34.717 ' 00:08:34.717 17:42:54 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:35.284 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:35.851 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:35.851 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:35.851 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:35.851 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:35.851 17:42:55 nvme -- nvme/nvme.sh@79 -- # uname 00:08:35.851 17:42:55 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:35.851 17:42:55 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:35.851 17:42:55 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:35.851 17:42:55 nvme -- common/autotest_common.sh@1084 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:35.851 17:42:55 nvme -- 
common/autotest_common.sh@1070 -- # _randomize_va_space=2 00:08:35.851 17:42:55 nvme -- common/autotest_common.sh@1071 -- # echo 0 00:08:35.851 17:42:55 nvme -- common/autotest_common.sh@1072 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:35.851 17:42:55 nvme -- common/autotest_common.sh@1073 -- # stubpid=75819 00:08:35.851 17:42:55 nvme -- common/autotest_common.sh@1074 -- # echo Waiting for stub to ready for secondary processes... 00:08:35.851 Waiting for stub to ready for secondary processes... 00:08:35.851 17:42:55 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:35.851 17:42:55 nvme -- common/autotest_common.sh@1077 -- # [[ -e /proc/75819 ]] 00:08:35.851 17:42:55 nvme -- common/autotest_common.sh@1078 -- # sleep 1s 00:08:35.851 [2024-11-05 17:42:55.701461] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:08:35.852 [2024-11-05 17:42:55.701668] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:36.795 [2024-11-05 17:42:56.417435] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:36.795 [2024-11-05 17:42:56.448145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:36.795 [2024-11-05 17:42:56.460950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:36.795 [2024-11-05 17:42:56.461313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.795 [2024-11-05 17:42:56.461226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:36.795 [2024-11-05 17:42:56.472416] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:36.795 [2024-11-05 17:42:56.472455] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:36.795 [2024-11-05 17:42:56.488062] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:36.795 [2024-11-05 17:42:56.488543] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:36.795 [2024-11-05 17:42:56.490350] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:36.795 [2024-11-05 17:42:56.491007] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:36.795 [2024-11-05 17:42:56.491186] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:36.795 [2024-11-05 17:42:56.493155] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:36.795 [2024-11-05 17:42:56.493493] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:36.795 [2024-11-05 17:42:56.493629] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:36.795 [2024-11-05 17:42:56.495726] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:36.795 [2024-11-05 17:42:56.495916] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:36.795 [2024-11-05 17:42:56.495965] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 
00:08:36.795 [2024-11-05 17:42:56.496002] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:36.795 [2024-11-05 17:42:56.496045] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:36.795 done. 00:08:36.795 17:42:56 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:36.795 17:42:56 nvme -- common/autotest_common.sh@1080 -- # echo done. 00:08:36.795 17:42:56 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:36.795 17:42:56 nvme -- common/autotest_common.sh@1103 -- # '[' 10 -le 1 ']' 00:08:36.795 17:42:56 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:36.795 17:42:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:36.795 ************************************ 00:08:36.795 START TEST nvme_reset 00:08:36.795 ************************************ 00:08:36.795 17:42:56 nvme.nvme_reset -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:37.056 Initializing NVMe Controllers 00:08:37.056 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:37.057 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:37.057 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:37.057 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:37.057 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:37.057 00:08:37.057 real 0m0.213s 00:08:37.057 user 0m0.061s 00:08:37.057 sys 0m0.100s 00:08:37.057 17:42:56 nvme.nvme_reset -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:37.057 17:42:56 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:37.057 ************************************ 00:08:37.057 END TEST nvme_reset 00:08:37.057 ************************************ 00:08:37.057 17:42:56 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:37.057 17:42:56 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:37.057 17:42:56 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:37.057 17:42:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:37.057 ************************************ 00:08:37.057 START TEST nvme_identify 00:08:37.057 ************************************ 00:08:37.057 17:42:56 nvme.nvme_identify -- common/autotest_common.sh@1127 -- # nvme_identify 00:08:37.057 17:42:56 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:37.057 17:42:56 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:37.057 17:42:56 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:37.057 17:42:56 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:37.057 17:42:56 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:37.057 17:42:56 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:08:37.057 17:42:56 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:37.057 17:42:56 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:37.057 17:42:56 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:37.057 17:42:57 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:37.057 17:42:57 nvme.nvme_identify -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 
0000:00:12.0 0000:00:13.0 00:08:37.057 17:42:57 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:37.321 ===================================================== 00:08:37.321 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:37.321 ===================================================== 00:08:37.321 Controller Capabilities/Features 00:08:37.321 ================================ 00:08:37.321 Vendor ID: 1b36 00:08:37.321 Subsystem Vendor ID: 1af4 00:08:37.321 Serial Number: 12340 00:08:37.321 Model Number: QEMU NVMe Ctrl 00:08:37.321 Firmware Version: 8.0.0 00:08:37.321 Recommended Arb Burst: 6 00:08:37.321 IEEE OUI Identifier: 00 54 52 00:08:37.321 Multi-path I/O 00:08:37.321 May have multiple subsystem ports: No 00:08:37.321 May have multiple controllers: No 00:08:37.321 Associated with SR-IOV VF: No 00:08:37.321 Max Data Transfer Size: 524288 00:08:37.321 Max Number of Namespaces: 256 00:08:37.321 Max Number of I/O Queues: 64 00:08:37.321 NVMe Specification Version (VS): 1.4 00:08:37.321 NVMe Specification Version (Identify): 1.4 00:08:37.321 Maximum Queue Entries: 2048 00:08:37.321 Contiguous Queues Required: Yes 00:08:37.321 Arbitration Mechanisms Supported 00:08:37.321 Weighted Round Robin: Not Supported 00:08:37.321 Vendor Specific: Not Supported 00:08:37.321 Reset Timeout: 7500 ms 00:08:37.321 Doorbell Stride: 4 bytes 00:08:37.321 NVM Subsystem Reset: Not Supported 00:08:37.321 Command Sets Supported 00:08:37.321 NVM Command Set: Supported 00:08:37.321 Boot Partition: Not Supported 00:08:37.321 Memory Page Size Minimum: 4096 bytes 00:08:37.321 Memory Page Size Maximum: 65536 bytes 00:08:37.321 Persistent Memory Region: Not Supported 00:08:37.321 Optional Asynchronous Events Supported 00:08:37.321 Namespace Attribute Notices: Supported 00:08:37.321 Firmware Activation Notices: Not Supported 00:08:37.321 ANA Change Notices: Not Supported 00:08:37.321 PLE Aggregate Log Change Notices: Not Supported 00:08:37.321 LBA Status Info Alert Notices: Not Supported 00:08:37.321 EGE Aggregate Log Change Notices: Not Supported 00:08:37.321 Normal NVM Subsystem Shutdown event: Not Supported 00:08:37.321 Zone Descriptor Change Notices: Not Supported 00:08:37.321 Discovery Log Change Notices: Not Supported 00:08:37.321 Controller Attributes 00:08:37.321 128-bit Host Identifier: Not Supported 00:08:37.321 Non-Operational Permissive Mode: Not Supported 00:08:37.321 NVM Sets: Not Supported 00:08:37.321 Read Recovery Levels: Not Supported 00:08:37.321 Endurance Groups: Not Supported 00:08:37.321 Predictable Latency Mode: Not Supported 00:08:37.321 Traffic Based Keep ALive: Not Supported 00:08:37.321 Namespace Granularity: Not Supported 00:08:37.321 SQ Associations: Not Supported 00:08:37.321 UUID List: Not Supported 00:08:37.321 Multi-Domain Subsystem: Not Supported 00:08:37.321 Fixed Capacity Management: Not Supported 00:08:37.321 Variable Capacity Management: Not Supported 00:08:37.321 Delete Endurance Group: Not Supported 00:08:37.321 Delete NVM Set: Not Supported 00:08:37.321 Extended LBA Formats Supported: Supported 00:08:37.321 Flexible Data Placement Supported: Not Supported 00:08:37.321 00:08:37.321 Controller Memory Buffer Support 00:08:37.321 ================================ 00:08:37.321 Supported: No 00:08:37.321 00:08:37.321 Persistent Memory Region Support 00:08:37.321 ================================ 00:08:37.321 Supported: No 00:08:37.321 00:08:37.321 Admin Command Set Attributes 00:08:37.321 ============================ 00:08:37.321 
Security Send/Receive: Not Supported 00:08:37.321 Format NVM: Supported 00:08:37.321 Firmware Activate/Download: Not Supported 00:08:37.321 Namespace Management: Supported 00:08:37.321 Device Self-Test: Not Supported 00:08:37.321 Directives: Supported 00:08:37.321 NVMe-MI: Not Supported 00:08:37.321 Virtualization Management: Not Supported 00:08:37.321 Doorbell Buffer Config: Supported 00:08:37.321 Get LBA Status Capability: Not Supported 00:08:37.321 Command & Feature Lockdown Capability: Not Supported 00:08:37.321 Abort Command Limit: 4 00:08:37.321 Async Event Request Limit: 4 00:08:37.321 Number of Firmware Slots: N/A 00:08:37.321 Firmware Slot 1 Read-Only: N/A 00:08:37.321 Firmware Activation Without Reset: N/A 00:08:37.321 Multiple Update Detection Support: N/A 00:08:37.321 Firmware Update Granularity: No Information Provided 00:08:37.321 Per-Namespace SMART Log: Yes 00:08:37.321 Asymmetric Namespace Access Log Page: Not Supported 00:08:37.321 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:37.321 Command Effects Log Page: Supported 00:08:37.321 Get Log Page Extended Data: Supported 00:08:37.321 Telemetry Log Pages: Not Supported 00:08:37.321 Persistent Event Log Pages: Not Supported 00:08:37.321 Supported Log Pages Log Page: May Support 00:08:37.321 Commands Supported & Effects Log Page: Not Supported 00:08:37.321 Feature Identifiers & Effects Log Page:May Support 00:08:37.321 NVMe-MI Commands & Effects Log Page: May Support 00:08:37.321 Data Area 4 for Telemetry Log: Not Supported 00:08:37.321 Error Log Page Entries Supported: 1 00:08:37.321 Keep Alive: Not Supported 00:08:37.321 00:08:37.321 NVM Command Set Attributes 00:08:37.321 ========================== 00:08:37.321 Submission Queue Entry Size 00:08:37.321 Max: 64 00:08:37.321 Min: 64 00:08:37.321 Completion Queue Entry Size 00:08:37.321 Max: 16 00:08:37.321 Min: 16 00:08:37.321 Number of Namespaces: 256 00:08:37.321 Compare Command: Supported 00:08:37.321 Write Uncorrectable Command: Not Supported 00:08:37.321 Dataset Management Command: Supported 00:08:37.321 Write Zeroes Command: Supported 00:08:37.321 Set Features Save Field: Supported 00:08:37.321 Reservations: Not Supported 00:08:37.321 Timestamp: Supported 00:08:37.321 Copy: Supported 00:08:37.321 Volatile Write Cache: Present 00:08:37.321 Atomic Write Unit (Normal): 1 00:08:37.321 Atomic Write Unit (PFail): 1 00:08:37.321 Atomic Compare & Write Unit: 1 00:08:37.321 Fused Compare & Write: Not Supported 00:08:37.321 Scatter-Gather List 00:08:37.321 SGL Command Set: Supported 00:08:37.321 SGL Keyed: Not Supported 00:08:37.321 SGL Bit Bucket Descriptor: Not Supported 00:08:37.321 SGL Metadata Pointer: Not Supported 00:08:37.321 Oversized SGL: Not Supported 00:08:37.321 SGL Metadata Address: Not Supported 00:08:37.321 SGL Offset: Not Supported 00:08:37.321 Transport SGL Data Block: Not Supported 00:08:37.321 Replay Protected Memory Block: Not Supported 00:08:37.321 00:08:37.321 Firmware Slot Information 00:08:37.321 ========================= 00:08:37.321 Active slot: 1 00:08:37.321 Slot 1 Firmware Revision: 1.0 00:08:37.321 00:08:37.322 00:08:37.322 Commands Supported and Effects 00:08:37.322 ============================== 00:08:37.322 Admin Commands 00:08:37.322 -------------- 00:08:37.322 Delete I/O Submission Queue (00h): Supported 00:08:37.322 Create I/O Submission Queue (01h): Supported 00:08:37.322 Get Log Page (02h): Supported 00:08:37.322 Delete I/O Completion Queue (04h): Supported 00:08:37.322 Create I/O Completion Queue (05h): Supported 00:08:37.322 Identify 
(06h): Supported 00:08:37.322 Abort (08h): Supported 00:08:37.322 Set Features (09h): Supported 00:08:37.322 Get Features (0Ah): Supported 00:08:37.322 Asynchronous Event Request (0Ch): Supported 00:08:37.322 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:37.322 Directive Send (19h): Supported 00:08:37.322 Directive Receive (1Ah): Supported 00:08:37.322 Virtualization Management (1Ch): Supported 00:08:37.322 Doorbell Buffer Config (7Ch): Supported 00:08:37.322 Format NVM (80h): Supported LBA-Change 00:08:37.322 I/O Commands 00:08:37.322 ------------ 00:08:37.322 Flush (00h): Supported LBA-Change 00:08:37.322 Write (01h): Supported LBA-Change 00:08:37.322 Read (02h): Supported 00:08:37.322 Compare (05h): Supported 00:08:37.322 Write Zeroes (08h): Supported LBA-Change 00:08:37.322 Dataset Management (09h): Supported LBA-Change 00:08:37.322 Unknown (0Ch): Supported 00:08:37.322 Unknown (12h): Supported 00:08:37.322 Copy (19h): Supported LBA-Change 00:08:37.322 Unknown (1Dh): Supported LBA-Change 00:08:37.322 00:08:37.322 Error Log 00:08:37.322 ========= 00:08:37.322 00:08:37.322 Arbitration 00:08:37.322 =========== 00:08:37.322 Arbitration Burst: no limit 00:08:37.322 00:08:37.322 Power Management 00:08:37.322 ================ 00:08:37.322 Number of Power States: 1 00:08:37.322 Current Power State: Power State #0 00:08:37.322 Power State #0: 00:08:37.322 Max Power: 25.00 W 00:08:37.322 Non-Operational State: Operational 00:08:37.322 Entry Latency: 16 microseconds 00:08:37.322 Exit Latency: 4 microseconds 00:08:37.322 Relative Read Throughput: 0 00:08:37.322 Relative Read Latency: 0 00:08:37.322 Relative Write Throughput: 0 00:08:37.322 Relative Write Latency: 0 00:08:37.322 Idle Power: Not Reported 00:08:37.322 Active Power: Not Reported 00:08:37.322 Non-Operational Permissive Mode: Not Supported 00:08:37.322 00:08:37.322 Health Information 00:08:37.322 ================== 00:08:37.322 Critical Warnings: 00:08:37.322 Available Spare Space: OK 00:08:37.322 Temperature: OK 00:08:37.322 Device Reliability: OK 00:08:37.322 Read Only: No 00:08:37.322 Volatile Memory Backup: OK 00:08:37.322 Current Temperature: 323 Kelvin (50 Celsius) 00:08:37.322 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:37.322 Available Spare: 0% 00:08:37.322 Available Spare Threshold: 0% 00:08:37.322 Life Percentage Used: 0% 00:08:37.322 Data Units Read: 662 00:08:37.322 Data Units Written: 590 00:08:37.322 Host Read Commands: 35492 00:08:37.322 Host Write Commands: 35278 00:08:37.322 Controller Busy Time: 0 minutes 00:08:37.322 Power Cycles: 0 00:08:37.322 Power On Hours: 0 hours 00:08:37.322 Unsafe Shutdowns: 0 00:08:37.322 Unrecoverable Media Errors: 0 00:08:37.322 Lifetime Error Log Entries: 0 00:08:37.322 Warning Temperature Time: 0 minutes 00:08:37.322 Critical Temperature Time: 0 minutes 00:08:37.322 00:08:37.322 Number of Queues 00:08:37.322 ================ 00:08:37.322 Number of I/O Submission Queues: 64 00:08:37.322 Number of I/O Completion Queues: 64 00:08:37.322 00:08:37.322 ZNS Specific Controller Data 00:08:37.322 ============================ 00:08:37.322 Zone Append Size Limit: 0 00:08:37.322 00:08:37.322 00:08:37.322 Active Namespaces 00:08:37.322 ================= 00:08:37.322 Namespace ID:1 00:08:37.322 Error Recovery Timeout: Unlimited 00:08:37.322 Command Set Identifier: NVM (00h) 00:08:37.322 Deallocate: Supported 00:08:37.322 Deallocated/Unwritten Error: Supported 00:08:37.322 Deallocated Read Value: All 0x00 00:08:37.322 Deallocate in Write Zeroes: Not Supported 00:08:37.322 
Deallocated Guard Field: 0xFFFF 00:08:37.322 Flush: Supported 00:08:37.322 Reservation: Not Supported 00:08:37.322 Metadata Transferred as: Separate Metadata Buffer 00:08:37.322 Namespace Sharing Capabilities: Private 00:08:37.322 Size (in LBAs): 1548666 (5GiB) 00:08:37.322 Capacity (in LBAs): 1548666 (5GiB) 00:08:37.322 Utilization (in LBAs): 1548666 (5GiB) 00:08:37.322 Thin Provisioning: Not Supported 00:08:37.322 Per-NS Atomic Units: No 00:08:37.322 Maximum Single Source Range Length: 128 00:08:37.322 Maximum Copy Length: 128 00:08:37.322 Maximum Source Range Count: 128 00:08:37.322 NGUID/EUI64 Never Reused: No 00:08:37.322 Namespace Write Protected: No 00:08:37.322 Number of LBA Formats: 8 00:08:37.322 Current LBA Format: LBA Format #07 00:08:37.322 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:37.322 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:37.322 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:37.322 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:37.322 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:37.322 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:37.322 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:37.322 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:37.322 00:08:37.322 NVM Specific Namespace Data 00:08:37.322 =========================== 00:08:37.322 Logical Block Storage Tag Mask: 0 00:08:37.322 Protection Information Capabilities: 00:08:37.322 16b Guard Protection Information Storage Tag Support: No 00:08:37.322 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:37.322 Storage Tag Check Read Support: No 00:08:37.322 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.322 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.322 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.322 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.322 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.322 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.322 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.322 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.322 ===================================================== 00:08:37.322 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:37.322 ===================================================== 00:08:37.322 Controller Capabilities/Features 00:08:37.322 ================================ 00:08:37.322 Vendor ID: 1b36 00:08:37.322 Subsystem Vendor ID: 1af4 00:08:37.322 Serial Number: 12341 00:08:37.322 Model Number: QEMU NVMe Ctrl 00:08:37.322 Firmware Version: 8.0.0 00:08:37.322 Recommended Arb Burst: 6 00:08:37.322 IEEE OUI Identifier: 00 54 52 00:08:37.322 Multi-path I/O 00:08:37.322 May have multiple subsystem ports: No 00:08:37.322 May have multiple controllers: No 00:08:37.322 Associated with SR-IOV VF: No 00:08:37.322 Max Data Transfer Size: 524288 00:08:37.322 Max Number of Namespaces: 256 00:08:37.322 Max Number of I/O Queues: 64 00:08:37.322 NVMe Specification Version (VS): 1.4 00:08:37.322 NVMe Specification Version (Identify): 1.4 00:08:37.322 Maximum Queue Entries: 2048 00:08:37.322 Contiguous Queues Required: Yes 00:08:37.322 Arbitration Mechanisms 
Supported 00:08:37.322 Weighted Round Robin: Not Supported 00:08:37.322 Vendor Specific: Not Supported 00:08:37.322 Reset Timeout: 7500 ms 00:08:37.322 Doorbell Stride: 4 bytes 00:08:37.322 NVM Subsystem Reset: Not Supported 00:08:37.322 Command Sets Supported 00:08:37.322 NVM Command Set: Supported 00:08:37.322 Boot Partition: Not Supported 00:08:37.322 Memory Page Size Minimum: 4096 bytes 00:08:37.322 Memory Page Size Maximum: 65536 bytes 00:08:37.322 Persistent Memory Region: Not Supported 00:08:37.322 Optional Asynchronous Events Supported 00:08:37.322 Namespace Attribute Notices: Supported 00:08:37.322 Firmware Activation Notices: Not Supported 00:08:37.322 ANA Change Notices: Not Supported 00:08:37.322 PLE Aggregate Log Change Notices: Not Supported 00:08:37.322 LBA Status Info Alert Notices: Not Supported 00:08:37.322 EGE Aggregate Log Change Notices: Not Supported 00:08:37.322 Normal NVM Subsystem Shutdown event: Not Supported 00:08:37.322 Zone Descriptor Change Notices: Not Supported 00:08:37.322 Discovery Log Change Notices: Not Supported 00:08:37.322 Controller Attributes 00:08:37.322 128-bit Host Identifier: Not Supported 00:08:37.322 Non-Operational Permissive Mode: Not Supported 00:08:37.322 NVM Sets: Not Supported 00:08:37.322 Read Recovery Levels: Not Supported 00:08:37.322 Endurance Groups: Not Supported 00:08:37.322 Predictable Latency Mode: Not Supported 00:08:37.322 Traffic Based Keep ALive: Not Supported 00:08:37.322 Namespace Granularity: Not Supported 00:08:37.322 SQ Associations: Not Supported 00:08:37.322 UUID List: Not Supported 00:08:37.322 Multi-Domain Subsystem: Not Supported 00:08:37.323 Fixed Capacity Management: Not Supported 00:08:37.323 Variable Capacity Management: Not Supported 00:08:37.323 Delete Endurance Group: Not Supported 00:08:37.323 Delete NVM Set: Not Supported 00:08:37.323 Extended LBA Formats Supported: Supported 00:08:37.323 Flexible Data Placement Supported: Not Supported 00:08:37.323 00:08:37.323 Controller Memory Buffer Support 00:08:37.323 ================================ 00:08:37.323 Supported: No 00:08:37.323 00:08:37.323 Persistent Memory Region Support 00:08:37.323 ================================ 00:08:37.323 Supported: No 00:08:37.323 00:08:37.323 Admin Command Set Attributes 00:08:37.323 ============================ 00:08:37.323 Security Send/Receive: Not Supported 00:08:37.323 Format NVM: Supported 00:08:37.323 Firmware Activate/Download: Not Supported 00:08:37.323 Namespace Management: Supported 00:08:37.323 Device Self-Test: Not Supported 00:08:37.323 Directives: Supported 00:08:37.323 NVMe-MI: Not Supported 00:08:37.323 Virtualization Management: Not Supported 00:08:37.323 Doorbell Buffer Config: Supported 00:08:37.323 Get LBA Status Capability: Not Supported 00:08:37.323 Command & Feature Lockdown Capability: Not Supported 00:08:37.323 Abort Command Limit: 4 00:08:37.323 Async Event Request Limit: 4 00:08:37.323 Number of Firmware Slots: N/A 00:08:37.323 Firmware Slot 1 Read-Only: N/A 00:08:37.323 Firmware Activation Without Reset: N/A 00:08:37.323 Multiple Update Detection Support: N/A 00:08:37.323 Firmware Update Granularity: No Information Provided 00:08:37.323 Per-Namespace SMART Log: Yes 00:08:37.323 Asymmetric Namespace Access Log Page: Not Supported 00:08:37.323 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:37.323 Command Effects Log Page: Supported 00:08:37.323 Get Log Page Extended Data: Supported 00:08:37.323 Telemetry Log Pages: Not Supported 00:08:37.323 Persistent Event Log Pages: Not Supported 00:08:37.323 
Supported Log Pages Log Page: May Support 00:08:37.323 Commands Supported & Effects Log Page: Not Supported 00:08:37.323 Feature Identifiers & Effects Log Page:May Support 00:08:37.323 NVMe-MI Commands & Effects Log Page: May Support 00:08:37.323 Data Area 4 for Telemetry Log: Not Supported 00:08:37.323 Error Log Page Entries Supported: 1 00:08:37.323 Keep Alive: Not Supported 00:08:37.323 00:08:37.323 NVM Command Set Attributes 00:08:37.323 ========================== 00:08:37.323 Submission Queue Entry Size 00:08:37.323 Max: 64 00:08:37.323 Min: 64 00:08:37.323 Completion Queue Entry Size 00:08:37.323 Max: 16 00:08:37.323 Min: 16 00:08:37.323 Number of Namespaces: 256 00:08:37.323 Compare Command: Supported 00:08:37.323 Write Uncorrectable Command: Not Supported 00:08:37.323 Dataset Management Command: Supported 00:08:37.323 Write Zeroes Command: Supported 00:08:37.323 Set Features Save Field: Supported 00:08:37.323 Reservations: Not Supported 00:08:37.323 Timestamp: Supported 00:08:37.323 Copy: Supported 00:08:37.323 Volatile Write Cache: Present 00:08:37.323 Atomic Write Unit (Normal): 1 00:08:37.323 Atomic Write Unit (PFail): 1 00:08:37.323 Atomic Compare & Write Unit: 1 00:08:37.323 Fused Compare & Write: Not Supported 00:08:37.323 Scatter-Gather List 00:08:37.323 SGL Command Set: Supported 00:08:37.323 SGL Keyed: Not Supported 00:08:37.323 SGL Bit Bucket Descriptor: Not Supported 00:08:37.323 SGL Metadata Pointer: Not Supported 00:08:37.323 Oversized SGL: Not Supported 00:08:37.323 SGL Metadata Address: Not Supported 00:08:37.323 SGL Offset: Not Supported 00:08:37.323 Transport SGL Data Block: Not Supported 00:08:37.323 Replay Protected Memory Block: Not Supported 00:08:37.323 00:08:37.323 Firmware Slot Information 00:08:37.323 ========================= 00:08:37.323 Active slot: 1 00:08:37.323 Slot 1 Firmware Revision: 1.0 00:08:37.323 00:08:37.323 00:08:37.323 Commands Supported and Effects 00:08:37.323 ============================== 00:08:37.323 Admin Commands 00:08:37.323 -------------- 00:08:37.323 Delete I/O Submission Queue (00h): Supported 00:08:37.323 Create I/O Submission Queue (01h): Supported 00:08:37.323 Get Log Page (02h): Supported 00:08:37.323 Delete I/O Completion Queue (04h): Supported 00:08:37.323 Create I/O Completion Queue (05h): Supported 00:08:37.323 Identify (06h): Supported 00:08:37.323 Abort (08h): Supported 00:08:37.323 Set Features (09h): Supported 00:08:37.323 Get Features (0Ah): Supported 00:08:37.323 Asynchronous Event Request (0Ch): Supported 00:08:37.323 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:37.323 Directive Send (19h): Supported 00:08:37.323 Directive Receive (1Ah): Supported 00:08:37.323 Virtualization Management (1Ch): Supported 00:08:37.323 Doorbell Buffer Config (7Ch): Supported 00:08:37.323 Format NVM (80h): Supported LBA-Change 00:08:37.323 I/O Commands 00:08:37.323 ------------ 00:08:37.323 Flush (00h): Supported LBA-Change 00:08:37.323 Write (01h): Supported LBA-Change 00:08:37.323 Read (02h): Supported 00:08:37.323 Compare (05h): Supported 00:08:37.323 Write Zeroes (08h): Supported LBA-Change 00:08:37.323 Dataset Management (09h): Supported LBA-Change 00:08:37.323 Unknown (0Ch): Supported 00:08:37.323 Unknown (12h): Supported 00:08:37.323 Copy (19h): Supported LBA-Change 00:08:37.323 Unknown (1Dh): Supported LBA-Change 00:08:37.323 00:08:37.323 Error Log 00:08:37.323 ========= 00:08:37.323 00:08:37.323 Arbitration 00:08:37.323 =========== 00:08:37.323 Arbitration Burst: no limit 00:08:37.323 00:08:37.323 Power 
Management 00:08:37.323 ================ 00:08:37.323 Number of Power States: 1 00:08:37.323 Current Power State: Power State #0 00:08:37.323 Power State #0: 00:08:37.323 Max Power: 25.00 W 00:08:37.323 Non-Operational State: Operational 00:08:37.323 Entry Latency: 16 microseconds 00:08:37.323 Exit Latency: 4 microseconds 00:08:37.323 Relative Read Throughput: 0 00:08:37.323 Relative Read Latency: 0 00:08:37.323 Relative Write Throughput: 0 00:08:37.323 Relative Write Latency: 0 00:08:37.323 Idle Power: Not Reported 00:08:37.323 Active Power: Not Reported 00:08:37.323 Non-Operational Permissive Mode: Not Supported 00:08:37.323 00:08:37.323 Health Information 00:08:37.323 ================== 00:08:37.323 Critical Warnings: 00:08:37.323 Available Spare Space: OK 00:08:37.323 Temperature: OK 00:08:37.323 Device Reliability: OK 00:08:37.323 Read Only: No 00:08:37.323 Volatile Memory Backup: OK 00:08:37.323 Current Temperature: 323 Kelvin (50 Celsius) 00:08:37.323 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:37.323 Available Spare: 0% 00:08:37.323 Available Spare Threshold: 0% 00:08:37.323 Life Percentage Used: 0% 00:08:37.323 Data Units Read: 1038 00:08:37.323 Data Units Written: 904 00:08:37.323 Host Read Commands: 52562 00:08:37.323 Host Write Commands: 51356 00:08:37.323 Controller Busy Time: 0 minutes 00:08:37.323 Power Cycles: 0 00:08:37.323 Power On Hours: 0 hours 00:08:37.323 Unsafe Shutdowns: 0 00:08:37.323 Unrecoverable Media Errors: 0 00:08:37.323 Lifetime Error Log Entries: 0 00:08:37.323 Warning Temperature Time: 0 minutes 00:08:37.323 Critical Temperature Time: 0 minutes 00:08:37.323 00:08:37.323 Number of Queues 00:08:37.323 ================ 00:08:37.323 Number of I/O Submission Queues: 64 00:08:37.323 Number of I/O Completion Queues: 64 00:08:37.323 00:08:37.323 ZNS Specific Controller Data 00:08:37.323 ============================ 00:08:37.323 Zone Append Size Limit: 0 00:08:37.323 00:08:37.323 00:08:37.323 Active Namespaces 00:08:37.323 ================= 00:08:37.323 Namespace ID:1 00:08:37.323 Error Recovery Timeout: Unlimited 00:08:37.323 Command Set Identifier: NVM (00h) 00:08:37.323 Deallocate: Supported 00:08:37.323 Deallocated/Unwritten Error: Supported 00:08:37.323 Deallocated Read Value: All 0x00 00:08:37.323 Deallocate in Write Zeroes: Not Supported 00:08:37.323 Deallocated Guard Field: 0xFFFF 00:08:37.323 Flush: Supported 00:08:37.323 Reservation: Not Supported 00:08:37.323 Namespace Sharing Capabilities: Private 00:08:37.323 Size (in LBAs): 1310720 (5GiB) 00:08:37.323 Capacity (in LBAs): 1310720 (5GiB) 00:08:37.323 Utilization (in LBAs): 1310720 (5GiB) 00:08:37.323 Thin Provisioning: Not Supported 00:08:37.323 Per-NS Atomic Units: No 00:08:37.323 Maximum Single Source Range Length: 128 00:08:37.323 Maximum Copy Length: 128 00:08:37.323 Maximum Source Range Count: 128 00:08:37.323 NGUID/EUI64 Never Reused: No 00:08:37.323 Namespace Write Protected: No 00:08:37.323 Number of LBA Formats: 8 00:08:37.323 Current LBA Format: LBA Format #04 00:08:37.323 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:37.323 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:37.323 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:37.323 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:37.323 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:37.323 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:37.323 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:37.324 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:37.324 00:08:37.324 NVM Specific 
Namespace Data 00:08:37.324 =========================== 00:08:37.324 Logical Block Storage Tag Mask: 0 00:08:37.324 Protection Information Capabilities: 00:08:37.324 16b Guard Protection Information Storage Tag Support: No 00:08:37.324 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:37.324 Storage Tag Check Read Support: No 00:08:37.324 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.324 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.324 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.324 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.324 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.324 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.324 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.324 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.324 ===================================================== 00:08:37.324 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:37.324 ===================================================== 00:08:37.324 Controller Capabilities/Features 00:08:37.324 ================================ 00:08:37.324 Vendor ID: 1b36 00:08:37.324 Subsystem Vendor ID: 1af4 00:08:37.324 Serial Number: 12343 00:08:37.324 Model Number: QEMU NVMe Ctrl 00:08:37.324 Firmware Version: 8.0.0 00:08:37.324 Recommended Arb Burst: 6 00:08:37.324 IEEE OUI Identifier: 00 54 52 00:08:37.324 Multi-path I/O 00:08:37.324 May have multiple subsystem ports: No 00:08:37.324 May have multiple controllers: Yes 00:08:37.324 Associated with SR-IOV VF: No 00:08:37.324 Max Data Transfer Size: 524288 00:08:37.324 Max Number of Namespaces: 256 00:08:37.324 Max Number of I/O Queues: 64 00:08:37.324 NVMe Specification Version (VS): 1.4 00:08:37.324 NVMe Specification Version (Identify): 1.4 00:08:37.324 Maximum Queue Entries: 2048 00:08:37.324 Contiguous Queues Required: Yes 00:08:37.324 Arbitration Mechanisms Supported 00:08:37.324 Weighted Round Robin: Not Supported 00:08:37.324 Vendor Specific: Not Supported 00:08:37.324 Reset Timeout: 7500 ms 00:08:37.324 Doorbell Stride: 4 bytes 00:08:37.324 NVM Subsystem Reset: Not Supported 00:08:37.324 Command Sets Supported 00:08:37.324 NVM Command Set: Supported 00:08:37.324 Boot Partition: Not Supported 00:08:37.324 Memory Page Size Minimum: 4096 bytes 00:08:37.324 Memory Page Size Maximum: 65536 bytes 00:08:37.324 Persistent Memory Region: Not Supported 00:08:37.324 Optional Asynchronous Events Supported 00:08:37.324 Namespace Attribute Notices: Supported 00:08:37.324 Firmware Activation Notices: Not Supported 00:08:37.324 ANA Change Notices: Not Supported 00:08:37.324 PLE Aggregate Log Change Notices: Not Supported 00:08:37.324 LBA Status Info Alert Notices: Not Supported 00:08:37.324 EGE Aggregate Log Change Notices: Not Supported 00:08:37.324 Normal NVM Subsystem Shutdown event: Not Supported 00:08:37.324 Zone Descriptor Change Notices: Not Supported 00:08:37.324 Discovery Log Change Notices: Not Supported 00:08:37.324 Controller Attributes 00:08:37.324 128-bit Host Identifier: Not Supported 00:08:37.324 Non-Operational Permissive Mode: Not Supported 00:08:37.324 NVM Sets: Not Supported 00:08:37.324 Read Recovery Levels: Not 
Supported 00:08:37.324 Endurance Groups: Supported 00:08:37.324 Predictable Latency Mode: Not Supported 00:08:37.324 Traffic Based Keep Alive: Not Supported 00:08:37.324 Namespace Granularity: Not Supported 00:08:37.324 SQ Associations: Not Supported 00:08:37.324 UUID List: Not Supported 00:08:37.324 Multi-Domain Subsystem: Not Supported 00:08:37.324 Fixed Capacity Management: Not Supported 00:08:37.324 Variable Capacity Management: Not Supported 00:08:37.324 Delete Endurance Group: Not Supported 00:08:37.324 Delete NVM Set: Not Supported 00:08:37.324 Extended LBA Formats Supported: Supported 00:08:37.324 Flexible Data Placement Supported: Supported 00:08:37.324 00:08:37.324 Controller Memory Buffer Support 00:08:37.324 ================================ 00:08:37.324 Supported: No 00:08:37.324 00:08:37.324 Persistent Memory Region Support 00:08:37.324 ================================ 00:08:37.324 Supported: No 00:08:37.324 00:08:37.324 Admin Command Set Attributes 00:08:37.324 ============================ 00:08:37.324 Security Send/Receive: Not Supported 00:08:37.324 Format NVM: Supported 00:08:37.324 Firmware Activate/Download: Not Supported 00:08:37.324 Namespace Management: Supported 00:08:37.324 Device Self-Test: Not Supported 00:08:37.324 Directives: Supported 00:08:37.324 NVMe-MI: Not Supported 00:08:37.324 Virtualization Management: Not Supported 00:08:37.324 Doorbell Buffer Config: Supported 00:08:37.324 Get LBA Status Capability: Not Supported 00:08:37.324 Command & Feature Lockdown Capability: Not Supported 00:08:37.324 Abort Command Limit: 4 00:08:37.324 Async Event Request Limit: 4 00:08:37.324 Number of Firmware Slots: N/A 00:08:37.324 Firmware Slot 1 Read-Only: N/A 00:08:37.324 Firmware Activation Without Reset: N/A 00:08:37.324 Multiple Update Detection Support: N/A 00:08:37.324 Firmware Update Granularity: No Information Provided 00:08:37.324 Per-Namespace SMART Log: Yes 00:08:37.324 Asymmetric Namespace Access Log Page: Not Supported 00:08:37.324 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:37.324 Command Effects Log Page: Supported 00:08:37.324 Get Log Page Extended Data: Supported 00:08:37.324 Telemetry Log Pages: Not Supported 00:08:37.324 Persistent Event Log Pages: Not Supported 00:08:37.324 Supported Log Pages Log Page: May Support 00:08:37.324 Commands Supported & Effects Log Page: Not Supported 00:08:37.324 Feature Identifiers & Effects Log Page: May Support 00:08:37.324 NVMe-MI Commands & Effects Log Page: May Support 00:08:37.324 Data Area 4 for Telemetry Log: Not Supported 00:08:37.324 Error Log Page Entries Supported: 1 00:08:37.324 Keep Alive: Not Supported 00:08:37.324 00:08:37.324 NVM Command Set Attributes 00:08:37.324 ========================== 00:08:37.324 Submission Queue Entry Size 00:08:37.324 Max: 64 00:08:37.324 Min: 64 00:08:37.324 Completion Queue Entry Size 00:08:37.324 Max: 16 00:08:37.324 Min: 16 00:08:37.324 Number of Namespaces: 256 00:08:37.324 Compare Command: Supported 00:08:37.324 Write Uncorrectable Command: Not Supported 00:08:37.324 Dataset Management Command: Supported 00:08:37.324 Write Zeroes Command: Supported 00:08:37.324 Set Features Save Field: Supported 00:08:37.324 Reservations: Not Supported 00:08:37.324 Timestamp: Supported 00:08:37.324 Copy: Supported 00:08:37.324 Volatile Write Cache: Present 00:08:37.324 Atomic Write Unit (Normal): 1 00:08:37.324 Atomic Write Unit (PFail): 1 00:08:37.324 Atomic Compare & Write Unit: 1 00:08:37.324 Fused Compare & Write: Not Supported 00:08:37.324 Scatter-Gather List 00:08:37.324 SGL
Command Set: Supported 00:08:37.324 SGL Keyed: Not Supported 00:08:37.324 SGL Bit Bucket Descriptor: Not Supported 00:08:37.324 SGL Metadata Pointer: Not Supported 00:08:37.324 Oversized SGL: Not Supported 00:08:37.324 SGL Metadata Address: Not Supported 00:08:37.324 SGL Offset: Not Supported 00:08:37.324 Transport SGL Data Block: Not Supported 00:08:37.324 Replay Protected Memory Block: Not Supported 00:08:37.324 00:08:37.324 Firmware Slot Information 00:08:37.324 ========================= 00:08:37.324 Active slot: 1 00:08:37.324 Slot 1 Firmware Revision: 1.0 00:08:37.324 00:08:37.324 00:08:37.324 Commands Supported and Effects 00:08:37.324 ============================== 00:08:37.324 Admin Commands 00:08:37.324 -------------- 00:08:37.324 Delete I/O Submission Queue (00h): Supported 00:08:37.324 Create I/O Submission Queue (01h): Supported 00:08:37.324 Get Log Page (02h): Supported 00:08:37.324 Delete I/O Completion Queue (04h): Supported 00:08:37.324 Create I/O Completion Queue (05h): Supported 00:08:37.324 Identify (06h): Supported 00:08:37.324 Abort (08h): Supported 00:08:37.324 Set Features (09h): Supported 00:08:37.324 Get Features (0Ah): Supported 00:08:37.324 Asynchronous Event Request (0Ch): Supported 00:08:37.324 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:37.324 Directive Send (19h): Supported 00:08:37.324 Directive Receive (1Ah): Supported 00:08:37.324 Virtualization Management (1Ch): Supported 00:08:37.324 Doorbell Buffer Config (7Ch): Supported 00:08:37.324 Format NVM (80h): Supported LBA-Change 00:08:37.324 I/O Commands 00:08:37.324 ------------ 00:08:37.324 Flush (00h): Supported LBA-Change 00:08:37.324 Write (01h): Supported LBA-Change 00:08:37.324 Read (02h): Supported 00:08:37.324 Compare (05h): Supported 00:08:37.324 Write Zeroes (08h): Supported LBA-Change 00:08:37.324 Dataset Management (09h): Supported LBA-Change 00:08:37.324 Unknown (0Ch): Supported 00:08:37.324 Unknown (12h): Supported 00:08:37.324 Copy (19h): Supported LBA-Change 00:08:37.324 Unknown (1Dh): Supported LBA-Change 00:08:37.324 00:08:37.324 Error Log 00:08:37.324 ========= 00:08:37.324 00:08:37.325 Arbitration 00:08:37.325 =========== 00:08:37.325 Arbitration Burst: no limit 00:08:37.325 00:08:37.325 Power Management 00:08:37.325 ================ 00:08:37.325 Number of Power States: 1 00:08:37.325 Current Power State: Power State #0 00:08:37.325 Power State #0: 00:08:37.325 Max Power: 25.00 W 00:08:37.325 Non-Operational State: Operational 00:08:37.325 Entry Latency: 16 microseconds 00:08:37.325 Exit Latency: 4 microseconds 00:08:37.325 Relative Read Throughput: 0 00:08:37.325 Relative Read Latency: 0 00:08:37.325 Relative Write Throughput: 0 00:08:37.325 Relative Write Latency: 0 00:08:37.325 Idle Power: Not Reported 00:08:37.325 Active Power: Not Reported 00:08:37.325 Non-Operational Permissive Mode: Not Supported 00:08:37.325 00:08:37.325 Health Information 00:08:37.325 ================== 00:08:37.325 Critical Warnings: 00:08:37.325 Available Spare Space: OK 00:08:37.325 Temperature: OK 00:08:37.325 Device Reliability: OK 00:08:37.325 Read Only: No 00:08:37.325 Volatile Memory Backup: OK 00:08:37.325 Current Temperature: 323 Kelvin (50 Celsius) 00:08:37.325 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:37.325 Available Spare: 0% 00:08:37.325 Available Spare Threshold: 0% 00:08:37.325 Life Percentage Used: 0% 00:08:37.325 [2024-11-05 17:42:57.193125] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 75840 terminated unexpected 00:08:37.325 [2024-11-05
17:42:57.193937] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 75840 terminated unexpected 00:08:37.325 [2024-11-05 17:42:57.194466] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 75840 terminated unexpected 00:08:37.325 [2024-11-05 17:42:57.195385] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 75840 terminated unexpected 00:08:37.325 Data Units Read: 721 00:08:37.325 Data Units Written: 650 00:08:37.325 Host Read Commands: 36308 00:08:37.325 Host Write Commands: 35731 00:08:37.325 Controller Busy Time: 0 minutes 00:08:37.325 Power Cycles: 0 00:08:37.325 Power On Hours: 0 hours 00:08:37.325 Unsafe Shutdowns: 0 00:08:37.325 Unrecoverable Media Errors: 0 00:08:37.325 Lifetime Error Log Entries: 0 00:08:37.325 Warning Temperature Time: 0 minutes 00:08:37.325 Critical Temperature Time: 0 minutes 00:08:37.325 00:08:37.325 Number of Queues 00:08:37.325 ================ 00:08:37.325 Number of I/O Submission Queues: 64 00:08:37.325 Number of I/O Completion Queues: 64 00:08:37.325 00:08:37.325 ZNS Specific Controller Data 00:08:37.325 ============================ 00:08:37.325 Zone Append Size Limit: 0 00:08:37.325 00:08:37.325 00:08:37.325 Active Namespaces 00:08:37.325 ================= 00:08:37.325 Namespace ID:1 00:08:37.325 Error Recovery Timeout: Unlimited 00:08:37.325 Command Set Identifier: NVM (00h) 00:08:37.325 Deallocate: Supported 00:08:37.325 Deallocated/Unwritten Error: Supported 00:08:37.325 Deallocated Read Value: All 0x00 00:08:37.325 Deallocate in Write Zeroes: Not Supported 00:08:37.325 Deallocated Guard Field: 0xFFFF 00:08:37.325 Flush: Supported 00:08:37.325 Reservation: Not Supported 00:08:37.325 Namespace Sharing Capabilities: Multiple Controllers 00:08:37.325 Size (in LBAs): 262144 (1GiB) 00:08:37.325 Capacity (in LBAs): 262144 (1GiB) 00:08:37.325 Utilization (in LBAs): 262144 (1GiB) 00:08:37.325 Thin Provisioning: Not Supported 00:08:37.325 Per-NS Atomic Units: No 00:08:37.325 Maximum Single Source Range Length: 128 00:08:37.325 Maximum Copy Length: 128 00:08:37.325 Maximum Source Range Count: 128 00:08:37.325 NGUID/EUI64 Never Reused: No 00:08:37.325 Namespace Write Protected: No 00:08:37.325 Endurance group ID: 1 00:08:37.325 Number of LBA Formats: 8 00:08:37.325 Current LBA Format: LBA Format #04 00:08:37.325 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:37.325 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:37.325 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:37.325 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:37.325 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:37.325 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:37.325 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:37.325 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:37.325 00:08:37.325 Get Feature FDP: 00:08:37.325 ================ 00:08:37.325 Enabled: Yes 00:08:37.325 FDP configuration index: 0 00:08:37.325 00:08:37.325 FDP configurations log page 00:08:37.325 =========================== 00:08:37.325 Number of FDP configurations: 1 00:08:37.325 Version: 0 00:08:37.325 Size: 112 00:08:37.325 FDP Configuration Descriptor: 0 00:08:37.325 Descriptor Size: 96 00:08:37.325 Reclaim Group Identifier format: 2 00:08:37.325 FDP Volatile Write Cache: Not Present 00:08:37.325 FDP Configuration: Valid 00:08:37.325 Vendor Specific Size: 0 00:08:37.325 Number of Reclaim Groups: 2 00:08:37.325 Number of Reclaim Unit Handles: 8
00:08:37.325 Max Placement Identifiers: 128 00:08:37.325 Number of Namespaces Supported: 256 00:08:37.325 Reclaim Unit Nominal Size: 6000000 bytes 00:08:37.325 Estimated Reclaim Unit Time Limit: Not Reported 00:08:37.325 RUH Desc #000: RUH Type: Initially Isolated 00:08:37.325 RUH Desc #001: RUH Type: Initially Isolated 00:08:37.325 RUH Desc #002: RUH Type: Initially Isolated 00:08:37.325 RUH Desc #003: RUH Type: Initially Isolated 00:08:37.325 RUH Desc #004: RUH Type: Initially Isolated 00:08:37.325 RUH Desc #005: RUH Type: Initially Isolated 00:08:37.325 RUH Desc #006: RUH Type: Initially Isolated 00:08:37.325 RUH Desc #007: RUH Type: Initially Isolated 00:08:37.325 00:08:37.325 FDP reclaim unit handle usage log page 00:08:37.325 ====================================== 00:08:37.325 Number of Reclaim Unit Handles: 8 00:08:37.325 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:37.325 RUH Usage Desc #001: RUH Attributes: Unused 00:08:37.325 RUH Usage Desc #002: RUH Attributes: Unused 00:08:37.325 RUH Usage Desc #003: RUH Attributes: Unused 00:08:37.325 RUH Usage Desc #004: RUH Attributes: Unused 00:08:37.325 RUH Usage Desc #005: RUH Attributes: Unused 00:08:37.325 RUH Usage Desc #006: RUH Attributes: Unused 00:08:37.325 RUH Usage Desc #007: RUH Attributes: Unused 00:08:37.325 00:08:37.325 FDP statistics log page 00:08:37.325 ======================= 00:08:37.325 Host bytes with metadata written: 421044224 00:08:37.325 Media bytes with metadata written: 421089280 00:08:37.325 Media bytes erased: 0 00:08:37.325 00:08:37.325 FDP events log page 00:08:37.325 =================== 00:08:37.325 Number of FDP events: 0 00:08:37.325 00:08:37.325 NVM Specific Namespace Data 00:08:37.325 =========================== 00:08:37.325 Logical Block Storage Tag Mask: 0 00:08:37.325 Protection Information Capabilities: 00:08:37.325 16b Guard Protection Information Storage Tag Support: No 00:08:37.325 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:37.325 Storage Tag Check Read Support: No 00:08:37.325 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.325 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.325 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.325 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.325 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.325 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.325 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.325 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.325 ===================================================== 00:08:37.325 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:37.325 ===================================================== 00:08:37.325 Controller Capabilities/Features 00:08:37.325 ================================ 00:08:37.325 Vendor ID: 1b36 00:08:37.325 Subsystem Vendor ID: 1af4 00:08:37.325 Serial Number: 12342 00:08:37.325 Model Number: QEMU NVMe Ctrl 00:08:37.325 Firmware Version: 8.0.0 00:08:37.325 Recommended Arb Burst: 6 00:08:37.325 IEEE OUI Identifier: 00 54 52 00:08:37.325 Multi-path I/O 00:08:37.325 May have multiple subsystem ports: No 00:08:37.325 May have multiple controllers: No
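The FDP log pages above (for the fdp-subsys3 controller at 0000:00:13.0) carry the only per-run byte counters in these dumps. As a hedged illustration only, not part of the test run, the counters can be pulled out of a saved identify dump like this; the file name identify.log is hypothetical:

#!/usr/bin/env bash
# Hedged sketch: extract the FDP statistics counters from a saved
# spdk_nvme_identify dump. "identify.log" is a hypothetical path; the
# labels match the "FDP statistics log page" section of this log.
log=identify.log
grep -E 'bytes with metadata written|Media bytes erased' "$log"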
00:08:37.325 Associated with SR-IOV VF: No 00:08:37.325 Max Data Transfer Size: 524288 00:08:37.325 Max Number of Namespaces: 256 00:08:37.325 Max Number of I/O Queues: 64 00:08:37.325 NVMe Specification Version (VS): 1.4 00:08:37.325 NVMe Specification Version (Identify): 1.4 00:08:37.325 Maximum Queue Entries: 2048 00:08:37.325 Contiguous Queues Required: Yes 00:08:37.325 Arbitration Mechanisms Supported 00:08:37.325 Weighted Round Robin: Not Supported 00:08:37.325 Vendor Specific: Not Supported 00:08:37.325 Reset Timeout: 7500 ms 00:08:37.325 Doorbell Stride: 4 bytes 00:08:37.326 NVM Subsystem Reset: Not Supported 00:08:37.326 Command Sets Supported 00:08:37.326 NVM Command Set: Supported 00:08:37.326 Boot Partition: Not Supported 00:08:37.326 Memory Page Size Minimum: 4096 bytes 00:08:37.326 Memory Page Size Maximum: 65536 bytes 00:08:37.326 Persistent Memory Region: Not Supported 00:08:37.326 Optional Asynchronous Events Supported 00:08:37.326 Namespace Attribute Notices: Supported 00:08:37.326 Firmware Activation Notices: Not Supported 00:08:37.326 ANA Change Notices: Not Supported 00:08:37.326 PLE Aggregate Log Change Notices: Not Supported 00:08:37.326 LBA Status Info Alert Notices: Not Supported 00:08:37.326 EGE Aggregate Log Change Notices: Not Supported 00:08:37.326 Normal NVM Subsystem Shutdown event: Not Supported 00:08:37.326 Zone Descriptor Change Notices: Not Supported 00:08:37.326 Discovery Log Change Notices: Not Supported 00:08:37.326 Controller Attributes 00:08:37.326 128-bit Host Identifier: Not Supported 00:08:37.326 Non-Operational Permissive Mode: Not Supported 00:08:37.326 NVM Sets: Not Supported 00:08:37.326 Read Recovery Levels: Not Supported 00:08:37.326 Endurance Groups: Not Supported 00:08:37.326 Predictable Latency Mode: Not Supported 00:08:37.326 Traffic Based Keep Alive: Not Supported 00:08:37.326 Namespace Granularity: Not Supported 00:08:37.326 SQ Associations: Not Supported 00:08:37.326 UUID List: Not Supported 00:08:37.326 Multi-Domain Subsystem: Not Supported 00:08:37.326 Fixed Capacity Management: Not Supported 00:08:37.326 Variable Capacity Management: Not Supported 00:08:37.326 Delete Endurance Group: Not Supported 00:08:37.326 Delete NVM Set: Not Supported 00:08:37.326 Extended LBA Formats Supported: Supported 00:08:37.326 Flexible Data Placement Supported: Not Supported 00:08:37.326 00:08:37.326 Controller Memory Buffer Support 00:08:37.326 ================================ 00:08:37.326 Supported: No 00:08:37.326 00:08:37.326 Persistent Memory Region Support 00:08:37.326 ================================ 00:08:37.326 Supported: No 00:08:37.326 00:08:37.326 Admin Command Set Attributes 00:08:37.326 ============================ 00:08:37.326 Security Send/Receive: Not Supported 00:08:37.326 Format NVM: Supported 00:08:37.326 Firmware Activate/Download: Not Supported 00:08:37.326 Namespace Management: Supported 00:08:37.326 Device Self-Test: Not Supported 00:08:37.326 Directives: Supported 00:08:37.326 NVMe-MI: Not Supported 00:08:37.326 Virtualization Management: Not Supported 00:08:37.326 Doorbell Buffer Config: Supported 00:08:37.326 Get LBA Status Capability: Not Supported 00:08:37.326 Command & Feature Lockdown Capability: Not Supported 00:08:37.326 Abort Command Limit: 4 00:08:37.326 Async Event Request Limit: 4 00:08:37.326 Number of Firmware Slots: N/A 00:08:37.326 Firmware Slot 1 Read-Only: N/A 00:08:37.326 Firmware Activation Without Reset: N/A 00:08:37.326 Multiple Update Detection Support: N/A 00:08:37.326 Firmware Update Granularity: No
Information Provided 00:08:37.326 Per-Namespace SMART Log: Yes 00:08:37.326 Asymmetric Namespace Access Log Page: Not Supported 00:08:37.326 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:37.326 Command Effects Log Page: Supported 00:08:37.326 Get Log Page Extended Data: Supported 00:08:37.326 Telemetry Log Pages: Not Supported 00:08:37.326 Persistent Event Log Pages: Not Supported 00:08:37.326 Supported Log Pages Log Page: May Support 00:08:37.326 Commands Supported & Effects Log Page: Not Supported 00:08:37.326 Feature Identifiers & Effects Log Page: May Support 00:08:37.326 NVMe-MI Commands & Effects Log Page: May Support 00:08:37.326 Data Area 4 for Telemetry Log: Not Supported 00:08:37.326 Error Log Page Entries Supported: 1 00:08:37.326 Keep Alive: Not Supported 00:08:37.326 00:08:37.326 NVM Command Set Attributes 00:08:37.326 ========================== 00:08:37.326 Submission Queue Entry Size 00:08:37.326 Max: 64 00:08:37.326 Min: 64 00:08:37.326 Completion Queue Entry Size 00:08:37.326 Max: 16 00:08:37.326 Min: 16 00:08:37.326 Number of Namespaces: 256 00:08:37.326 Compare Command: Supported 00:08:37.326 Write Uncorrectable Command: Not Supported 00:08:37.326 Dataset Management Command: Supported 00:08:37.326 Write Zeroes Command: Supported 00:08:37.326 Set Features Save Field: Supported 00:08:37.326 Reservations: Not Supported 00:08:37.326 Timestamp: Supported 00:08:37.326 Copy: Supported 00:08:37.326 Volatile Write Cache: Present 00:08:37.326 Atomic Write Unit (Normal): 1 00:08:37.326 Atomic Write Unit (PFail): 1 00:08:37.326 Atomic Compare & Write Unit: 1 00:08:37.326 Fused Compare & Write: Not Supported 00:08:37.326 Scatter-Gather List 00:08:37.326 SGL Command Set: Supported 00:08:37.326 SGL Keyed: Not Supported 00:08:37.326 SGL Bit Bucket Descriptor: Not Supported 00:08:37.326 SGL Metadata Pointer: Not Supported 00:08:37.326 Oversized SGL: Not Supported 00:08:37.326 SGL Metadata Address: Not Supported 00:08:37.326 SGL Offset: Not Supported 00:08:37.326 Transport SGL Data Block: Not Supported 00:08:37.326 Replay Protected Memory Block: Not Supported 00:08:37.326 00:08:37.326 Firmware Slot Information 00:08:37.326 ========================= 00:08:37.326 Active slot: 1 00:08:37.326 Slot 1 Firmware Revision: 1.0 00:08:37.326 00:08:37.326 00:08:37.326 Commands Supported and Effects 00:08:37.326 ============================== 00:08:37.326 Admin Commands 00:08:37.326 -------------- 00:08:37.326 Delete I/O Submission Queue (00h): Supported 00:08:37.326 Create I/O Submission Queue (01h): Supported 00:08:37.326 Get Log Page (02h): Supported 00:08:37.326 Delete I/O Completion Queue (04h): Supported 00:08:37.326 Create I/O Completion Queue (05h): Supported 00:08:37.326 Identify (06h): Supported 00:08:37.326 Abort (08h): Supported 00:08:37.326 Set Features (09h): Supported 00:08:37.326 Get Features (0Ah): Supported 00:08:37.326 Asynchronous Event Request (0Ch): Supported 00:08:37.326 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:37.326 Directive Send (19h): Supported 00:08:37.326 Directive Receive (1Ah): Supported 00:08:37.326 Virtualization Management (1Ch): Supported 00:08:37.326 Doorbell Buffer Config (7Ch): Supported 00:08:37.326 Format NVM (80h): Supported LBA-Change 00:08:37.326 I/O Commands 00:08:37.326 ------------ 00:08:37.326 Flush (00h): Supported LBA-Change 00:08:37.326 Write (01h): Supported LBA-Change 00:08:37.326 Read (02h): Supported 00:08:37.326 Compare (05h): Supported 00:08:37.326 Write Zeroes (08h): Supported LBA-Change 00:08:37.326 Dataset Management
(09h): Supported LBA-Change 00:08:37.326 Unknown (0Ch): Supported 00:08:37.326 Unknown (12h): Supported 00:08:37.326 Copy (19h): Supported LBA-Change 00:08:37.326 Unknown (1Dh): Supported LBA-Change 00:08:37.326 00:08:37.326 Error Log 00:08:37.326 ========= 00:08:37.326 00:08:37.326 Arbitration 00:08:37.326 =========== 00:08:37.326 Arbitration Burst: no limit 00:08:37.326 00:08:37.326 Power Management 00:08:37.326 ================ 00:08:37.326 Number of Power States: 1 00:08:37.326 Current Power State: Power State #0 00:08:37.326 Power State #0: 00:08:37.326 Max Power: 25.00 W 00:08:37.326 Non-Operational State: Operational 00:08:37.326 Entry Latency: 16 microseconds 00:08:37.326 Exit Latency: 4 microseconds 00:08:37.326 Relative Read Throughput: 0 00:08:37.326 Relative Read Latency: 0 00:08:37.326 Relative Write Throughput: 0 00:08:37.326 Relative Write Latency: 0 00:08:37.326 Idle Power: Not Reported 00:08:37.326 Active Power: Not Reported 00:08:37.326 Non-Operational Permissive Mode: Not Supported 00:08:37.326 00:08:37.326 Health Information 00:08:37.326 ================== 00:08:37.326 Critical Warnings: 00:08:37.326 Available Spare Space: OK 00:08:37.326 Temperature: OK 00:08:37.326 Device Reliability: OK 00:08:37.326 Read Only: No 00:08:37.326 Volatile Memory Backup: OK 00:08:37.326 Current Temperature: 323 Kelvin (50 Celsius) 00:08:37.327 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:37.327 Available Spare: 0% 00:08:37.327 Available Spare Threshold: 0% 00:08:37.327 Life Percentage Used: 0% 00:08:37.327 Data Units Read: 1992 00:08:37.327 Data Units Written: 1779 00:08:37.327 Host Read Commands: 107359 00:08:37.327 Host Write Commands: 105628 00:08:37.327 Controller Busy Time: 0 minutes 00:08:37.327 Power Cycles: 0 00:08:37.327 Power On Hours: 0 hours 00:08:37.327 Unsafe Shutdowns: 0 00:08:37.327 Unrecoverable Media Errors: 0 00:08:37.327 Lifetime Error Log Entries: 0 00:08:37.327 Warning Temperature Time: 0 minutes 00:08:37.327 Critical Temperature Time: 0 minutes 00:08:37.327 00:08:37.327 Number of Queues 00:08:37.327 ================ 00:08:37.327 Number of I/O Submission Queues: 64 00:08:37.327 Number of I/O Completion Queues: 64 00:08:37.327 00:08:37.327 ZNS Specific Controller Data 00:08:37.327 ============================ 00:08:37.327 Zone Append Size Limit: 0 00:08:37.327 00:08:37.327 00:08:37.327 Active Namespaces 00:08:37.327 ================= 00:08:37.327 Namespace ID:1 00:08:37.327 Error Recovery Timeout: Unlimited 00:08:37.327 Command Set Identifier: NVM (00h) 00:08:37.327 Deallocate: Supported 00:08:37.327 Deallocated/Unwritten Error: Supported 00:08:37.327 Deallocated Read Value: All 0x00 00:08:37.327 Deallocate in Write Zeroes: Not Supported 00:08:37.327 Deallocated Guard Field: 0xFFFF 00:08:37.327 Flush: Supported 00:08:37.327 Reservation: Not Supported 00:08:37.327 Namespace Sharing Capabilities: Private 00:08:37.327 Size (in LBAs): 1048576 (4GiB) 00:08:37.327 Capacity (in LBAs): 1048576 (4GiB) 00:08:37.327 Utilization (in LBAs): 1048576 (4GiB) 00:08:37.327 Thin Provisioning: Not Supported 00:08:37.327 Per-NS Atomic Units: No 00:08:37.327 Maximum Single Source Range Length: 128 00:08:37.327 Maximum Copy Length: 128 00:08:37.327 Maximum Source Range Count: 128 00:08:37.327 NGUID/EUI64 Never Reused: No 00:08:37.327 Namespace Write Protected: No 00:08:37.327 Number of LBA Formats: 8 00:08:37.327 Current LBA Format: LBA Format #04 00:08:37.327 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:37.327 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:37.327 
LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:37.327 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:37.327 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:37.327 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:37.327 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:37.327 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:37.327 00:08:37.327 NVM Specific Namespace Data 00:08:37.327 =========================== 00:08:37.327 Logical Block Storage Tag Mask: 0 00:08:37.327 Protection Information Capabilities: 00:08:37.327 16b Guard Protection Information Storage Tag Support: No 00:08:37.327 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:37.327 Storage Tag Check Read Support: No 00:08:37.327 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.327 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.327 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.327 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.327 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.327 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.327 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.327 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.327 Namespace ID:2 00:08:37.327 Error Recovery Timeout: Unlimited 00:08:37.327 Command Set Identifier: NVM (00h) 00:08:37.327 Deallocate: Supported 00:08:37.327 Deallocated/Unwritten Error: Supported 00:08:37.327 Deallocated Read Value: All 0x00 00:08:37.327 Deallocate in Write Zeroes: Not Supported 00:08:37.327 Deallocated Guard Field: 0xFFFF 00:08:37.327 Flush: Supported 00:08:37.327 Reservation: Not Supported 00:08:37.327 Namespace Sharing Capabilities: Private 00:08:37.327 Size (in LBAs): 1048576 (4GiB) 00:08:37.327 Capacity (in LBAs): 1048576 (4GiB) 00:08:37.327 Utilization (in LBAs): 1048576 (4GiB) 00:08:37.327 Thin Provisioning: Not Supported 00:08:37.327 Per-NS Atomic Units: No 00:08:37.327 Maximum Single Source Range Length: 128 00:08:37.327 Maximum Copy Length: 128 00:08:37.327 Maximum Source Range Count: 128 00:08:37.327 NGUID/EUI64 Never Reused: No 00:08:37.327 Namespace Write Protected: No 00:08:37.327 Number of LBA Formats: 8 00:08:37.327 Current LBA Format: LBA Format #04 00:08:37.327 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:37.327 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:37.327 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:37.327 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:37.327 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:37.327 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:37.327 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:37.327 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:37.327 00:08:37.327 NVM Specific Namespace Data 00:08:37.327 =========================== 00:08:37.327 Logical Block Storage Tag Mask: 0 00:08:37.327 Protection Information Capabilities: 00:08:37.327 16b Guard Protection Information Storage Tag Support: No 00:08:37.327 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:37.327 Storage Tag Check Read Support: No 00:08:37.327 Extended LBA Format #00: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.327 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.327 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.327 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.327 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.327 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.327 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.327 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.327 Namespace ID:3 00:08:37.327 Error Recovery Timeout: Unlimited 00:08:37.327 Command Set Identifier: NVM (00h) 00:08:37.327 Deallocate: Supported 00:08:37.327 Deallocated/Unwritten Error: Supported 00:08:37.327 Deallocated Read Value: All 0x00 00:08:37.327 Deallocate in Write Zeroes: Not Supported 00:08:37.327 Deallocated Guard Field: 0xFFFF 00:08:37.327 Flush: Supported 00:08:37.327 Reservation: Not Supported 00:08:37.327 Namespace Sharing Capabilities: Private 00:08:37.327 Size (in LBAs): 1048576 (4GiB) 00:08:37.327 Capacity (in LBAs): 1048576 (4GiB) 00:08:37.327 Utilization (in LBAs): 1048576 (4GiB) 00:08:37.327 Thin Provisioning: Not Supported 00:08:37.327 Per-NS Atomic Units: No 00:08:37.327 Maximum Single Source Range Length: 128 00:08:37.327 Maximum Copy Length: 128 00:08:37.327 Maximum Source Range Count: 128 00:08:37.327 NGUID/EUI64 Never Reused: No 00:08:37.327 Namespace Write Protected: No 00:08:37.327 Number of LBA Formats: 8 00:08:37.327 Current LBA Format: LBA Format #04 00:08:37.327 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:37.327 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:37.327 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:37.327 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:37.327 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:37.327 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:37.327 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:37.327 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:37.327 00:08:37.327 NVM Specific Namespace Data 00:08:37.327 =========================== 00:08:37.327 Logical Block Storage Tag Mask: 0 00:08:37.327 Protection Information Capabilities: 00:08:37.327 16b Guard Protection Information Storage Tag Support: No 00:08:37.327 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:37.327 Storage Tag Check Read Support: No 00:08:37.327 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.327 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.327 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.327 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.327 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.327 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.327 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.327 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.327 17:42:57 
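The xtrace records just below (nvme/nvme.sh@15 and @16) show the test looping over the attached PCIe controllers and running spdk_nvme_identify against each one in turn. A minimal sketch of that loop; the bdfs array here is illustrative, since the real script derives it from the devices bound for the test:

#!/usr/bin/env bash
# Hedged reconstruction of the nvme.sh identify loop visible in the
# xtrace below. The binary path and the -r/-i flags are taken verbatim
# from this log; the bdfs list is illustrative only.
bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
for bdf in "${bdfs[@]}"; do
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r "trtype:PCIe traddr:${bdf}" -i 0
done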
nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:37.327 17:42:57 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:37.589 ===================================================== 00:08:37.589 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:37.589 ===================================================== 00:08:37.589 Controller Capabilities/Features 00:08:37.589 ================================ 00:08:37.589 Vendor ID: 1b36 00:08:37.589 Subsystem Vendor ID: 1af4 00:08:37.589 Serial Number: 12340 00:08:37.589 Model Number: QEMU NVMe Ctrl 00:08:37.589 Firmware Version: 8.0.0 00:08:37.589 Recommended Arb Burst: 6 00:08:37.589 IEEE OUI Identifier: 00 54 52 00:08:37.589 Multi-path I/O 00:08:37.589 May have multiple subsystem ports: No 00:08:37.589 May have multiple controllers: No 00:08:37.589 Associated with SR-IOV VF: No 00:08:37.589 Max Data Transfer Size: 524288 00:08:37.589 Max Number of Namespaces: 256 00:08:37.589 Max Number of I/O Queues: 64 00:08:37.589 NVMe Specification Version (VS): 1.4 00:08:37.589 NVMe Specification Version (Identify): 1.4 00:08:37.589 Maximum Queue Entries: 2048 00:08:37.589 Contiguous Queues Required: Yes 00:08:37.589 Arbitration Mechanisms Supported 00:08:37.589 Weighted Round Robin: Not Supported 00:08:37.589 Vendor Specific: Not Supported 00:08:37.589 Reset Timeout: 7500 ms 00:08:37.589 Doorbell Stride: 4 bytes 00:08:37.589 NVM Subsystem Reset: Not Supported 00:08:37.589 Command Sets Supported 00:08:37.589 NVM Command Set: Supported 00:08:37.589 Boot Partition: Not Supported 00:08:37.589 Memory Page Size Minimum: 4096 bytes 00:08:37.589 Memory Page Size Maximum: 65536 bytes 00:08:37.589 Persistent Memory Region: Not Supported 00:08:37.590 Optional Asynchronous Events Supported 00:08:37.590 Namespace Attribute Notices: Supported 00:08:37.590 Firmware Activation Notices: Not Supported 00:08:37.590 ANA Change Notices: Not Supported 00:08:37.590 PLE Aggregate Log Change Notices: Not Supported 00:08:37.590 LBA Status Info Alert Notices: Not Supported 00:08:37.590 EGE Aggregate Log Change Notices: Not Supported 00:08:37.590 Normal NVM Subsystem Shutdown event: Not Supported 00:08:37.590 Zone Descriptor Change Notices: Not Supported 00:08:37.590 Discovery Log Change Notices: Not Supported 00:08:37.590 Controller Attributes 00:08:37.590 128-bit Host Identifier: Not Supported 00:08:37.590 Non-Operational Permissive Mode: Not Supported 00:08:37.590 NVM Sets: Not Supported 00:08:37.590 Read Recovery Levels: Not Supported 00:08:37.590 Endurance Groups: Not Supported 00:08:37.590 Predictable Latency Mode: Not Supported 00:08:37.590 Traffic Based Keep Alive: Not Supported 00:08:37.590 Namespace Granularity: Not Supported 00:08:37.590 SQ Associations: Not Supported 00:08:37.590 UUID List: Not Supported 00:08:37.590 Multi-Domain Subsystem: Not Supported 00:08:37.590 Fixed Capacity Management: Not Supported 00:08:37.590 Variable Capacity Management: Not Supported 00:08:37.590 Delete Endurance Group: Not Supported 00:08:37.590 Delete NVM Set: Not Supported 00:08:37.590 Extended LBA Formats Supported: Supported 00:08:37.590 Flexible Data Placement Supported: Not Supported 00:08:37.590 00:08:37.590 Controller Memory Buffer Support 00:08:37.590 ================================ 00:08:37.590 Supported: No 00:08:37.590 00:08:37.590 Persistent Memory Region Support 00:08:37.590 ================================ 00:08:37.590 Supported: No 00:08:37.590 00:08:37.590 Admin
Command Set Attributes 00:08:37.590 ============================ 00:08:37.590 Security Send/Receive: Not Supported 00:08:37.590 Format NVM: Supported 00:08:37.590 Firmware Activate/Download: Not Supported 00:08:37.590 Namespace Management: Supported 00:08:37.590 Device Self-Test: Not Supported 00:08:37.590 Directives: Supported 00:08:37.590 NVMe-MI: Not Supported 00:08:37.590 Virtualization Management: Not Supported 00:08:37.590 Doorbell Buffer Config: Supported 00:08:37.590 Get LBA Status Capability: Not Supported 00:08:37.590 Command & Feature Lockdown Capability: Not Supported 00:08:37.590 Abort Command Limit: 4 00:08:37.590 Async Event Request Limit: 4 00:08:37.590 Number of Firmware Slots: N/A 00:08:37.590 Firmware Slot 1 Read-Only: N/A 00:08:37.590 Firmware Activation Without Reset: N/A 00:08:37.590 Multiple Update Detection Support: N/A 00:08:37.590 Firmware Update Granularity: No Information Provided 00:08:37.590 Per-Namespace SMART Log: Yes 00:08:37.590 Asymmetric Namespace Access Log Page: Not Supported 00:08:37.590 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:37.590 Command Effects Log Page: Supported 00:08:37.590 Get Log Page Extended Data: Supported 00:08:37.590 Telemetry Log Pages: Not Supported 00:08:37.590 Persistent Event Log Pages: Not Supported 00:08:37.590 Supported Log Pages Log Page: May Support 00:08:37.590 Commands Supported & Effects Log Page: Not Supported 00:08:37.590 Feature Identifiers & Effects Log Page: May Support 00:08:37.590 NVMe-MI Commands & Effects Log Page: May Support 00:08:37.590 Data Area 4 for Telemetry Log: Not Supported 00:08:37.590 Error Log Page Entries Supported: 1 00:08:37.590 Keep Alive: Not Supported 00:08:37.590 00:08:37.590 NVM Command Set Attributes 00:08:37.590 ========================== 00:08:37.590 Submission Queue Entry Size 00:08:37.590 Max: 64 00:08:37.590 Min: 64 00:08:37.590 Completion Queue Entry Size 00:08:37.590 Max: 16 00:08:37.590 Min: 16 00:08:37.590 Number of Namespaces: 256 00:08:37.590 Compare Command: Supported 00:08:37.590 Write Uncorrectable Command: Not Supported 00:08:37.590 Dataset Management Command: Supported 00:08:37.590 Write Zeroes Command: Supported 00:08:37.590 Set Features Save Field: Supported 00:08:37.590 Reservations: Not Supported 00:08:37.590 Timestamp: Supported 00:08:37.590 Copy: Supported 00:08:37.590 Volatile Write Cache: Present 00:08:37.590 Atomic Write Unit (Normal): 1 00:08:37.590 Atomic Write Unit (PFail): 1 00:08:37.590 Atomic Compare & Write Unit: 1 00:08:37.590 Fused Compare & Write: Not Supported 00:08:37.590 Scatter-Gather List 00:08:37.590 SGL Command Set: Supported 00:08:37.590 SGL Keyed: Not Supported 00:08:37.590 SGL Bit Bucket Descriptor: Not Supported 00:08:37.590 SGL Metadata Pointer: Not Supported 00:08:37.590 Oversized SGL: Not Supported 00:08:37.590 SGL Metadata Address: Not Supported 00:08:37.590 SGL Offset: Not Supported 00:08:37.590 Transport SGL Data Block: Not Supported 00:08:37.590 Replay Protected Memory Block: Not Supported 00:08:37.590 00:08:37.590 Firmware Slot Information 00:08:37.590 ========================= 00:08:37.590 Active slot: 1 00:08:37.590 Slot 1 Firmware Revision: 1.0 00:08:37.590 00:08:37.590 00:08:37.590 Commands Supported and Effects 00:08:37.590 ============================== 00:08:37.590 Admin Commands 00:08:37.590 -------------- 00:08:37.590 Delete I/O Submission Queue (00h): Supported 00:08:37.590 Create I/O Submission Queue (01h): Supported 00:08:37.590 Get Log Page (02h): Supported 00:08:37.590 Delete I/O Completion Queue (04h): Supported
00:08:37.590 Create I/O Completion Queue (05h): Supported 00:08:37.590 Identify (06h): Supported 00:08:37.590 Abort (08h): Supported 00:08:37.590 Set Features (09h): Supported 00:08:37.590 Get Features (0Ah): Supported 00:08:37.590 Asynchronous Event Request (0Ch): Supported 00:08:37.590 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:37.590 Directive Send (19h): Supported 00:08:37.590 Directive Receive (1Ah): Supported 00:08:37.590 Virtualization Management (1Ch): Supported 00:08:37.590 Doorbell Buffer Config (7Ch): Supported 00:08:37.590 Format NVM (80h): Supported LBA-Change 00:08:37.590 I/O Commands 00:08:37.590 ------------ 00:08:37.590 Flush (00h): Supported LBA-Change 00:08:37.590 Write (01h): Supported LBA-Change 00:08:37.590 Read (02h): Supported 00:08:37.590 Compare (05h): Supported 00:08:37.590 Write Zeroes (08h): Supported LBA-Change 00:08:37.590 Dataset Management (09h): Supported LBA-Change 00:08:37.590 Unknown (0Ch): Supported 00:08:37.590 Unknown (12h): Supported 00:08:37.590 Copy (19h): Supported LBA-Change 00:08:37.590 Unknown (1Dh): Supported LBA-Change 00:08:37.590 00:08:37.590 Error Log 00:08:37.590 ========= 00:08:37.590 00:08:37.590 Arbitration 00:08:37.590 =========== 00:08:37.590 Arbitration Burst: no limit 00:08:37.590 00:08:37.590 Power Management 00:08:37.590 ================ 00:08:37.590 Number of Power States: 1 00:08:37.590 Current Power State: Power State #0 00:08:37.590 Power State #0: 00:08:37.590 Max Power: 25.00 W 00:08:37.590 Non-Operational State: Operational 00:08:37.590 Entry Latency: 16 microseconds 00:08:37.590 Exit Latency: 4 microseconds 00:08:37.590 Relative Read Throughput: 0 00:08:37.590 Relative Read Latency: 0 00:08:37.590 Relative Write Throughput: 0 00:08:37.590 Relative Write Latency: 0 00:08:37.590 Idle Power: Not Reported 00:08:37.590 Active Power: Not Reported 00:08:37.590 Non-Operational Permissive Mode: Not Supported 00:08:37.590 00:08:37.590 Health Information 00:08:37.590 ================== 00:08:37.590 Critical Warnings: 00:08:37.590 Available Spare Space: OK 00:08:37.590 Temperature: OK 00:08:37.590 Device Reliability: OK 00:08:37.590 Read Only: No 00:08:37.590 Volatile Memory Backup: OK 00:08:37.590 Current Temperature: 323 Kelvin (50 Celsius) 00:08:37.590 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:37.590 Available Spare: 0% 00:08:37.590 Available Spare Threshold: 0% 00:08:37.590 Life Percentage Used: 0% 00:08:37.590 Data Units Read: 662 00:08:37.590 Data Units Written: 590 00:08:37.590 Host Read Commands: 35492 00:08:37.590 Host Write Commands: 35278 00:08:37.590 Controller Busy Time: 0 minutes 00:08:37.590 Power Cycles: 0 00:08:37.590 Power On Hours: 0 hours 00:08:37.590 Unsafe Shutdowns: 0 00:08:37.590 Unrecoverable Media Errors: 0 00:08:37.590 Lifetime Error Log Entries: 0 00:08:37.590 Warning Temperature Time: 0 minutes 00:08:37.590 Critical Temperature Time: 0 minutes 00:08:37.590 00:08:37.590 Number of Queues 00:08:37.590 ================ 00:08:37.590 Number of I/O Submission Queues: 64 00:08:37.590 Number of I/O Completion Queues: 64 00:08:37.590 00:08:37.590 ZNS Specific Controller Data 00:08:37.590 ============================ 00:08:37.590 Zone Append Size Limit: 0 00:08:37.590 00:08:37.590 00:08:37.590 Active Namespaces 00:08:37.590 ================= 00:08:37.590 Namespace ID:1 00:08:37.590 Error Recovery Timeout: Unlimited 00:08:37.590 Command Set Identifier: NVM (00h) 00:08:37.590 Deallocate: Supported 00:08:37.590 Deallocated/Unwritten Error: Supported 00:08:37.590 Deallocated Read Value: 
All 0x00 00:08:37.590 Deallocate in Write Zeroes: Not Supported 00:08:37.591 Deallocated Guard Field: 0xFFFF 00:08:37.591 Flush: Supported 00:08:37.591 Reservation: Not Supported 00:08:37.591 Metadata Transferred as: Separate Metadata Buffer 00:08:37.591 Namespace Sharing Capabilities: Private 00:08:37.591 Size (in LBAs): 1548666 (5GiB) 00:08:37.591 Capacity (in LBAs): 1548666 (5GiB) 00:08:37.591 Utilization (in LBAs): 1548666 (5GiB) 00:08:37.591 Thin Provisioning: Not Supported 00:08:37.591 Per-NS Atomic Units: No 00:08:37.591 Maximum Single Source Range Length: 128 00:08:37.591 Maximum Copy Length: 128 00:08:37.591 Maximum Source Range Count: 128 00:08:37.591 NGUID/EUI64 Never Reused: No 00:08:37.591 Namespace Write Protected: No 00:08:37.591 Number of LBA Formats: 8 00:08:37.591 Current LBA Format: LBA Format #07 00:08:37.591 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:37.591 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:37.591 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:37.591 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:37.591 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:37.591 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:37.591 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:37.591 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:37.591 00:08:37.591 NVM Specific Namespace Data 00:08:37.591 =========================== 00:08:37.591 Logical Block Storage Tag Mask: 0 00:08:37.591 Protection Information Capabilities: 00:08:37.591 16b Guard Protection Information Storage Tag Support: No 00:08:37.591 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:37.591 Storage Tag Check Read Support: No 00:08:37.591 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.591 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.591 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.591 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.591 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.591 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.591 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.591 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.591 17:42:57 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:37.591 17:42:57 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:37.854 ===================================================== 00:08:37.854 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:37.854 ===================================================== 00:08:37.854 Controller Capabilities/Features 00:08:37.854 ================================ 00:08:37.854 Vendor ID: 1b36 00:08:37.854 Subsystem Vendor ID: 1af4 00:08:37.854 Serial Number: 12341 00:08:37.854 Model Number: QEMU NVMe Ctrl 00:08:37.854 Firmware Version: 8.0.0 00:08:37.854 Recommended Arb Burst: 6 00:08:37.854 IEEE OUI Identifier: 00 54 52 00:08:37.854 Multi-path I/O 00:08:37.854 May have multiple subsystem ports: No 00:08:37.854 May have multiple controllers: No 00:08:37.854 Associated with SR-IOV VF: No 00:08:37.854 Max Data Transfer 
Size: 524288 00:08:37.854 Max Number of Namespaces: 256 00:08:37.854 Max Number of I/O Queues: 64 00:08:37.854 NVMe Specification Version (VS): 1.4 00:08:37.854 NVMe Specification Version (Identify): 1.4 00:08:37.854 Maximum Queue Entries: 2048 00:08:37.854 Contiguous Queues Required: Yes 00:08:37.854 Arbitration Mechanisms Supported 00:08:37.854 Weighted Round Robin: Not Supported 00:08:37.854 Vendor Specific: Not Supported 00:08:37.854 Reset Timeout: 7500 ms 00:08:37.854 Doorbell Stride: 4 bytes 00:08:37.854 NVM Subsystem Reset: Not Supported 00:08:37.854 Command Sets Supported 00:08:37.854 NVM Command Set: Supported 00:08:37.854 Boot Partition: Not Supported 00:08:37.854 Memory Page Size Minimum: 4096 bytes 00:08:37.854 Memory Page Size Maximum: 65536 bytes 00:08:37.854 Persistent Memory Region: Not Supported 00:08:37.854 Optional Asynchronous Events Supported 00:08:37.854 Namespace Attribute Notices: Supported 00:08:37.854 Firmware Activation Notices: Not Supported 00:08:37.854 ANA Change Notices: Not Supported 00:08:37.854 PLE Aggregate Log Change Notices: Not Supported 00:08:37.854 LBA Status Info Alert Notices: Not Supported 00:08:37.854 EGE Aggregate Log Change Notices: Not Supported 00:08:37.854 Normal NVM Subsystem Shutdown event: Not Supported 00:08:37.854 Zone Descriptor Change Notices: Not Supported 00:08:37.854 Discovery Log Change Notices: Not Supported 00:08:37.854 Controller Attributes 00:08:37.854 128-bit Host Identifier: Not Supported 00:08:37.854 Non-Operational Permissive Mode: Not Supported 00:08:37.854 NVM Sets: Not Supported 00:08:37.854 Read Recovery Levels: Not Supported 00:08:37.854 Endurance Groups: Not Supported 00:08:37.854 Predictable Latency Mode: Not Supported 00:08:37.854 Traffic Based Keep Alive: Not Supported 00:08:37.854 Namespace Granularity: Not Supported 00:08:37.854 SQ Associations: Not Supported 00:08:37.854 UUID List: Not Supported 00:08:37.854 Multi-Domain Subsystem: Not Supported 00:08:37.854 Fixed Capacity Management: Not Supported 00:08:37.854 Variable Capacity Management: Not Supported 00:08:37.854 Delete Endurance Group: Not Supported 00:08:37.854 Delete NVM Set: Not Supported 00:08:37.854 Extended LBA Formats Supported: Supported 00:08:37.854 Flexible Data Placement Supported: Not Supported 00:08:37.854 00:08:37.854 Controller Memory Buffer Support 00:08:37.854 ================================ 00:08:37.854 Supported: No 00:08:37.854 00:08:37.854 Persistent Memory Region Support 00:08:37.854 ================================ 00:08:37.854 Supported: No 00:08:37.854 00:08:37.854 Admin Command Set Attributes 00:08:37.854 ============================ 00:08:37.854 Security Send/Receive: Not Supported 00:08:37.854 Format NVM: Supported 00:08:37.854 Firmware Activate/Download: Not Supported 00:08:37.854 Namespace Management: Supported 00:08:37.854 Device Self-Test: Not Supported 00:08:37.854 Directives: Supported 00:08:37.854 NVMe-MI: Not Supported 00:08:37.854 Virtualization Management: Not Supported 00:08:37.854 Doorbell Buffer Config: Supported 00:08:37.854 Get LBA Status Capability: Not Supported 00:08:37.854 Command & Feature Lockdown Capability: Not Supported 00:08:37.855 Abort Command Limit: 4 00:08:37.855 Async Event Request Limit: 4 00:08:37.855 Number of Firmware Slots: N/A 00:08:37.855 Firmware Slot 1 Read-Only: N/A 00:08:37.855 Firmware Activation Without Reset: N/A 00:08:37.855 Multiple Update Detection Support: N/A 00:08:37.855 Firmware Update Granularity: No Information Provided 00:08:37.855 Per-Namespace SMART Log: Yes 00:08:37.855
Asymmetric Namespace Access Log Page: Not Supported 00:08:37.855 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:37.855 Command Effects Log Page: Supported 00:08:37.855 Get Log Page Extended Data: Supported 00:08:37.855 Telemetry Log Pages: Not Supported 00:08:37.855 Persistent Event Log Pages: Not Supported 00:08:37.855 Supported Log Pages Log Page: May Support 00:08:37.855 Commands Supported & Effects Log Page: Not Supported 00:08:37.855 Feature Identifiers & Effects Log Page: May Support 00:08:37.855 NVMe-MI Commands & Effects Log Page: May Support 00:08:37.855 Data Area 4 for Telemetry Log: Not Supported 00:08:37.855 Error Log Page Entries Supported: 1 00:08:37.855 Keep Alive: Not Supported 00:08:37.855 00:08:37.855 NVM Command Set Attributes 00:08:37.855 ========================== 00:08:37.855 Submission Queue Entry Size 00:08:37.855 Max: 64 00:08:37.855 Min: 64 00:08:37.855 Completion Queue Entry Size 00:08:37.855 Max: 16 00:08:37.855 Min: 16 00:08:37.855 Number of Namespaces: 256 00:08:37.855 Compare Command: Supported 00:08:37.855 Write Uncorrectable Command: Not Supported 00:08:37.855 Dataset Management Command: Supported 00:08:37.855 Write Zeroes Command: Supported 00:08:37.855 Set Features Save Field: Supported 00:08:37.855 Reservations: Not Supported 00:08:37.855 Timestamp: Supported 00:08:37.855 Copy: Supported 00:08:37.855 Volatile Write Cache: Present 00:08:37.855 Atomic Write Unit (Normal): 1 00:08:37.855 Atomic Write Unit (PFail): 1 00:08:37.855 Atomic Compare & Write Unit: 1 00:08:37.855 Fused Compare & Write: Not Supported 00:08:37.855 Scatter-Gather List 00:08:37.855 SGL Command Set: Supported 00:08:37.855 SGL Keyed: Not Supported 00:08:37.855 SGL Bit Bucket Descriptor: Not Supported 00:08:37.855 SGL Metadata Pointer: Not Supported 00:08:37.855 Oversized SGL: Not Supported 00:08:37.855 SGL Metadata Address: Not Supported 00:08:37.855 SGL Offset: Not Supported 00:08:37.855 Transport SGL Data Block: Not Supported 00:08:37.855 Replay Protected Memory Block: Not Supported 00:08:37.855 00:08:37.855 Firmware Slot Information 00:08:37.855 ========================= 00:08:37.855 Active slot: 1 00:08:37.855 Slot 1 Firmware Revision: 1.0 00:08:37.855 00:08:37.855 00:08:37.855 Commands Supported and Effects 00:08:37.855 ============================== 00:08:37.855 Admin Commands 00:08:37.855 -------------- 00:08:37.855 Delete I/O Submission Queue (00h): Supported 00:08:37.855 Create I/O Submission Queue (01h): Supported 00:08:37.855 Get Log Page (02h): Supported 00:08:37.855 Delete I/O Completion Queue (04h): Supported 00:08:37.855 Create I/O Completion Queue (05h): Supported 00:08:37.855 Identify (06h): Supported 00:08:37.855 Abort (08h): Supported 00:08:37.855 Set Features (09h): Supported 00:08:37.855 Get Features (0Ah): Supported 00:08:37.855 Asynchronous Event Request (0Ch): Supported 00:08:37.855 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:37.855 Directive Send (19h): Supported 00:08:37.855 Directive Receive (1Ah): Supported 00:08:37.855 Virtualization Management (1Ch): Supported 00:08:37.855 Doorbell Buffer Config (7Ch): Supported 00:08:37.855 Format NVM (80h): Supported LBA-Change 00:08:37.855 I/O Commands 00:08:37.855 ------------ 00:08:37.855 Flush (00h): Supported LBA-Change 00:08:37.855 Write (01h): Supported LBA-Change 00:08:37.855 Read (02h): Supported 00:08:37.855 Compare (05h): Supported 00:08:37.855 Write Zeroes (08h): Supported LBA-Change 00:08:37.855 Dataset Management (09h): Supported LBA-Change 00:08:37.855 Unknown (0Ch): Supported
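Every controller dump in this run includes a Health Information section (several appear above), with the current temperature printed in both Kelvin and Celsius. As a hedged one-liner, not part of the test itself, the Celsius values can be pulled out of a saved dump like this; the file name identify.log is hypothetical:

#!/usr/bin/env bash
# Hedged sketch: print the current temperature (Celsius) reported by
# each controller in a saved identify dump. "identify.log" is a
# hypothetical path; the pattern matches the "Current Temperature:
# 323 Kelvin (50 Celsius)" lines in the Health Information sections.
log=identify.log
sed -n 's/.*Current Temperature: \([0-9][0-9]*\) Kelvin (\([0-9][0-9]*\) Celsius).*/\2/p' "$log"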
00:08:37.855 Unknown (12h): Supported 00:08:37.855 Copy (19h): Supported LBA-Change 00:08:37.855 Unknown (1Dh): Supported LBA-Change 00:08:37.855 00:08:37.855 Error Log 00:08:37.855 ========= 00:08:37.855 00:08:37.855 Arbitration 00:08:37.855 =========== 00:08:37.855 Arbitration Burst: no limit 00:08:37.855 00:08:37.855 Power Management 00:08:37.855 ================ 00:08:37.855 Number of Power States: 1 00:08:37.855 Current Power State: Power State #0 00:08:37.855 Power State #0: 00:08:37.855 Max Power: 25.00 W 00:08:37.855 Non-Operational State: Operational 00:08:37.855 Entry Latency: 16 microseconds 00:08:37.855 Exit Latency: 4 microseconds 00:08:37.855 Relative Read Throughput: 0 00:08:37.855 Relative Read Latency: 0 00:08:37.855 Relative Write Throughput: 0 00:08:37.855 Relative Write Latency: 0 00:08:37.855 Idle Power: Not Reported 00:08:37.855 Active Power: Not Reported 00:08:37.855 Non-Operational Permissive Mode: Not Supported 00:08:37.855 00:08:37.855 Health Information 00:08:37.855 ================== 00:08:37.855 Critical Warnings: 00:08:37.855 Available Spare Space: OK 00:08:37.855 Temperature: OK 00:08:37.855 Device Reliability: OK 00:08:37.855 Read Only: No 00:08:37.855 Volatile Memory Backup: OK 00:08:37.855 Current Temperature: 323 Kelvin (50 Celsius) 00:08:37.855 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:37.855 Available Spare: 0% 00:08:37.855 Available Spare Threshold: 0% 00:08:37.855 Life Percentage Used: 0% 00:08:37.855 Data Units Read: 1038 00:08:37.855 Data Units Written: 904 00:08:37.855 Host Read Commands: 52562 00:08:37.855 Host Write Commands: 51356 00:08:37.855 Controller Busy Time: 0 minutes 00:08:37.855 Power Cycles: 0 00:08:37.855 Power On Hours: 0 hours 00:08:37.855 Unsafe Shutdowns: 0 00:08:37.855 Unrecoverable Media Errors: 0 00:08:37.855 Lifetime Error Log Entries: 0 00:08:37.855 Warning Temperature Time: 0 minutes 00:08:37.855 Critical Temperature Time: 0 minutes 00:08:37.855 00:08:37.855 Number of Queues 00:08:37.855 ================ 00:08:37.855 Number of I/O Submission Queues: 64 00:08:37.855 Number of I/O Completion Queues: 64 00:08:37.855 00:08:37.855 ZNS Specific Controller Data 00:08:37.855 ============================ 00:08:37.855 Zone Append Size Limit: 0 00:08:37.855 00:08:37.855 00:08:37.855 Active Namespaces 00:08:37.855 ================= 00:08:37.855 Namespace ID:1 00:08:37.855 Error Recovery Timeout: Unlimited 00:08:37.855 Command Set Identifier: NVM (00h) 00:08:37.855 Deallocate: Supported 00:08:37.855 Deallocated/Unwritten Error: Supported 00:08:37.855 Deallocated Read Value: All 0x00 00:08:37.855 Deallocate in Write Zeroes: Not Supported 00:08:37.855 Deallocated Guard Field: 0xFFFF 00:08:37.855 Flush: Supported 00:08:37.855 Reservation: Not Supported 00:08:37.855 Namespace Sharing Capabilities: Private 00:08:37.855 Size (in LBAs): 1310720 (5GiB) 00:08:37.855 Capacity (in LBAs): 1310720 (5GiB) 00:08:37.855 Utilization (in LBAs): 1310720 (5GiB) 00:08:37.855 Thin Provisioning: Not Supported 00:08:37.855 Per-NS Atomic Units: No 00:08:37.855 Maximum Single Source Range Length: 128 00:08:37.855 Maximum Copy Length: 128 00:08:37.855 Maximum Source Range Count: 128 00:08:37.855 NGUID/EUI64 Never Reused: No 00:08:37.855 Namespace Write Protected: No 00:08:37.855 Number of LBA Formats: 8 00:08:37.855 Current LBA Format: LBA Format #04 00:08:37.855 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:37.855 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:37.855 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:37.855 LBA 
Format #03: Data Size: 512 Metadata Size: 64 00:08:37.855 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:37.855 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:37.855 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:37.855 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:37.855 00:08:37.855 NVM Specific Namespace Data 00:08:37.855 =========================== 00:08:37.855 Logical Block Storage Tag Mask: 0 00:08:37.855 Protection Information Capabilities: 00:08:37.855 16b Guard Protection Information Storage Tag Support: No 00:08:37.855 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:37.855 Storage Tag Check Read Support: No 00:08:37.855 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.855 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.855 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.855 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.855 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.855 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.855 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.855 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.855 17:42:57 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:37.855 17:42:57 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:37.855 ===================================================== 00:08:37.856 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:37.856 ===================================================== 00:08:37.856 Controller Capabilities/Features 00:08:37.856 ================================ 00:08:37.856 Vendor ID: 1b36 00:08:37.856 Subsystem Vendor ID: 1af4 00:08:37.856 Serial Number: 12342 00:08:37.856 Model Number: QEMU NVMe Ctrl 00:08:37.856 Firmware Version: 8.0.0 00:08:37.856 Recommended Arb Burst: 6 00:08:37.856 IEEE OUI Identifier: 00 54 52 00:08:37.856 Multi-path I/O 00:08:37.856 May have multiple subsystem ports: No 00:08:37.856 May have multiple controllers: No 00:08:37.856 Associated with SR-IOV VF: No 00:08:37.856 Max Data Transfer Size: 524288 00:08:37.856 Max Number of Namespaces: 256 00:08:37.856 Max Number of I/O Queues: 64 00:08:37.856 NVMe Specification Version (VS): 1.4 00:08:37.856 NVMe Specification Version (Identify): 1.4 00:08:37.856 Maximum Queue Entries: 2048 00:08:37.856 Contiguous Queues Required: Yes 00:08:37.856 Arbitration Mechanisms Supported 00:08:37.856 Weighted Round Robin: Not Supported 00:08:37.856 Vendor Specific: Not Supported 00:08:37.856 Reset Timeout: 7500 ms 00:08:37.856 Doorbell Stride: 4 bytes 00:08:37.856 NVM Subsystem Reset: Not Supported 00:08:37.856 Command Sets Supported 00:08:37.856 NVM Command Set: Supported 00:08:37.856 Boot Partition: Not Supported 00:08:37.856 Memory Page Size Minimum: 4096 bytes 00:08:37.856 Memory Page Size Maximum: 65536 bytes 00:08:37.856 Persistent Memory Region: Not Supported 00:08:37.856 Optional Asynchronous Events Supported 00:08:37.856 Namespace Attribute Notices: Supported 00:08:37.856 Firmware Activation Notices: Not Supported 00:08:37.856 ANA Change Notices: Not 
Supported 00:08:37.856 PLE Aggregate Log Change Notices: Not Supported 00:08:37.856 LBA Status Info Alert Notices: Not Supported 00:08:37.856 EGE Aggregate Log Change Notices: Not Supported 00:08:37.856 Normal NVM Subsystem Shutdown event: Not Supported 00:08:37.856 Zone Descriptor Change Notices: Not Supported 00:08:37.856 Discovery Log Change Notices: Not Supported 00:08:37.856 Controller Attributes 00:08:37.856 128-bit Host Identifier: Not Supported 00:08:37.856 Non-Operational Permissive Mode: Not Supported 00:08:37.856 NVM Sets: Not Supported 00:08:37.856 Read Recovery Levels: Not Supported 00:08:37.856 Endurance Groups: Not Supported 00:08:37.856 Predictable Latency Mode: Not Supported 00:08:37.856 Traffic Based Keep Alive: Not Supported 00:08:37.856 Namespace Granularity: Not Supported 00:08:37.856 SQ Associations: Not Supported 00:08:37.856 UUID List: Not Supported 00:08:37.856 Multi-Domain Subsystem: Not Supported 00:08:37.856 Fixed Capacity Management: Not Supported 00:08:37.856 Variable Capacity Management: Not Supported 00:08:37.856 Delete Endurance Group: Not Supported 00:08:37.856 Delete NVM Set: Not Supported 00:08:37.856 Extended LBA Formats Supported: Supported 00:08:37.856 Flexible Data Placement Supported: Not Supported 00:08:37.856 00:08:37.856 Controller Memory Buffer Support 00:08:37.856 ================================ 00:08:37.856 Supported: No 00:08:37.856 00:08:37.856 Persistent Memory Region Support 00:08:37.856 ================================ 00:08:37.856 Supported: No 00:08:37.856 00:08:37.856 Admin Command Set Attributes 00:08:37.856 ============================ 00:08:37.856 Security Send/Receive: Not Supported 00:08:37.856 Format NVM: Supported 00:08:37.856 Firmware Activate/Download: Not Supported 00:08:37.856 Namespace Management: Supported 00:08:37.856 Device Self-Test: Not Supported 00:08:37.856 Directives: Supported 00:08:37.856 NVMe-MI: Not Supported 00:08:37.856 Virtualization Management: Not Supported 00:08:37.856 Doorbell Buffer Config: Supported 00:08:37.856 Get LBA Status Capability: Not Supported 00:08:37.856 Command & Feature Lockdown Capability: Not Supported 00:08:37.856 Abort Command Limit: 4 00:08:37.856 Async Event Request Limit: 4 00:08:37.856 Number of Firmware Slots: N/A 00:08:37.856 Firmware Slot 1 Read-Only: N/A 00:08:37.856 Firmware Activation Without Reset: N/A 00:08:37.856 Multiple Update Detection Support: N/A 00:08:37.856 Firmware Update Granularity: No Information Provided 00:08:37.856 Per-Namespace SMART Log: Yes 00:08:37.856 Asymmetric Namespace Access Log Page: Not Supported 00:08:37.856 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:37.856 Command Effects Log Page: Supported 00:08:37.856 Get Log Page Extended Data: Supported 00:08:37.856 Telemetry Log Pages: Not Supported 00:08:37.856 Persistent Event Log Pages: Not Supported 00:08:37.856 Supported Log Pages Log Page: May Support 00:08:37.856 Commands Supported & Effects Log Page: Not Supported 00:08:37.856 Feature Identifiers & Effects Log Page: May Support 00:08:37.856 NVMe-MI Commands & Effects Log Page: May Support 00:08:37.856 Data Area 4 for Telemetry Log: Not Supported 00:08:37.856 Error Log Page Entries Supported: 1 00:08:37.856 Keep Alive: Not Supported 00:08:37.856 00:08:37.856 NVM Command Set Attributes 00:08:37.856 ========================== 00:08:37.856 Submission Queue Entry Size 00:08:37.856 Max: 64 00:08:37.856 Min: 64 00:08:37.856 Completion Queue Entry Size 00:08:37.856 Max: 16 00:08:37.856 Min: 16 00:08:37.856 Number of Namespaces: 256 00:08:37.856 Compare
Command: Supported 00:08:37.856 Write Uncorrectable Command: Not Supported 00:08:37.856 Dataset Management Command: Supported 00:08:37.856 Write Zeroes Command: Supported 00:08:37.856 Set Features Save Field: Supported 00:08:37.856 Reservations: Not Supported 00:08:37.856 Timestamp: Supported 00:08:37.856 Copy: Supported 00:08:37.856 Volatile Write Cache: Present 00:08:37.856 Atomic Write Unit (Normal): 1 00:08:37.856 Atomic Write Unit (PFail): 1 00:08:37.856 Atomic Compare & Write Unit: 1 00:08:37.856 Fused Compare & Write: Not Supported 00:08:37.856 Scatter-Gather List 00:08:37.856 SGL Command Set: Supported 00:08:37.856 SGL Keyed: Not Supported 00:08:37.856 SGL Bit Bucket Descriptor: Not Supported 00:08:37.856 SGL Metadata Pointer: Not Supported 00:08:37.856 Oversized SGL: Not Supported 00:08:37.856 SGL Metadata Address: Not Supported 00:08:37.856 SGL Offset: Not Supported 00:08:37.856 Transport SGL Data Block: Not Supported 00:08:37.856 Replay Protected Memory Block: Not Supported 00:08:37.856 00:08:37.856 Firmware Slot Information 00:08:37.856 ========================= 00:08:37.856 Active slot: 1 00:08:37.856 Slot 1 Firmware Revision: 1.0 00:08:37.856 00:08:37.856 00:08:37.856 Commands Supported and Effects 00:08:37.856 ============================== 00:08:37.856 Admin Commands 00:08:37.856 -------------- 00:08:37.856 Delete I/O Submission Queue (00h): Supported 00:08:37.856 Create I/O Submission Queue (01h): Supported 00:08:37.856 Get Log Page (02h): Supported 00:08:37.856 Delete I/O Completion Queue (04h): Supported 00:08:37.856 Create I/O Completion Queue (05h): Supported 00:08:37.856 Identify (06h): Supported 00:08:37.856 Abort (08h): Supported 00:08:37.856 Set Features (09h): Supported 00:08:37.856 Get Features (0Ah): Supported 00:08:37.856 Asynchronous Event Request (0Ch): Supported 00:08:37.856 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:37.856 Directive Send (19h): Supported 00:08:37.856 Directive Receive (1Ah): Supported 00:08:37.856 Virtualization Management (1Ch): Supported 00:08:37.856 Doorbell Buffer Config (7Ch): Supported 00:08:37.856 Format NVM (80h): Supported LBA-Change 00:08:37.856 I/O Commands 00:08:37.856 ------------ 00:08:37.856 Flush (00h): Supported LBA-Change 00:08:37.856 Write (01h): Supported LBA-Change 00:08:37.856 Read (02h): Supported 00:08:37.856 Compare (05h): Supported 00:08:37.856 Write Zeroes (08h): Supported LBA-Change 00:08:37.856 Dataset Management (09h): Supported LBA-Change 00:08:37.856 Unknown (0Ch): Supported 00:08:37.856 Unknown (12h): Supported 00:08:37.856 Copy (19h): Supported LBA-Change 00:08:37.856 Unknown (1Dh): Supported LBA-Change 00:08:37.856 00:08:37.856 Error Log 00:08:37.856 ========= 00:08:37.856 00:08:37.856 Arbitration 00:08:37.856 =========== 00:08:37.856 Arbitration Burst: no limit 00:08:37.856 00:08:37.856 Power Management 00:08:37.856 ================ 00:08:37.856 Number of Power States: 1 00:08:37.856 Current Power State: Power State #0 00:08:37.856 Power State #0: 00:08:37.856 Max Power: 25.00 W 00:08:37.856 Non-Operational State: Operational 00:08:37.856 Entry Latency: 16 microseconds 00:08:37.856 Exit Latency: 4 microseconds 00:08:37.856 Relative Read Throughput: 0 00:08:37.856 Relative Read Latency: 0 00:08:37.856 Relative Write Throughput: 0 00:08:37.856 Relative Write Latency: 0 00:08:37.856 Idle Power: Not Reported 00:08:37.856 Active Power: Not Reported 00:08:37.856 Non-Operational Permissive Mode: Not Supported 00:08:37.856 00:08:37.856 Health Information 00:08:37.856 ================== 
00:08:37.856 Critical Warnings: 00:08:37.856 Available Spare Space: OK 00:08:37.856 Temperature: OK 00:08:37.857 Device Reliability: OK 00:08:37.857 Read Only: No 00:08:37.857 Volatile Memory Backup: OK 00:08:37.857 Current Temperature: 323 Kelvin (50 Celsius) 00:08:37.857 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:37.857 Available Spare: 0% 00:08:37.857 Available Spare Threshold: 0% 00:08:37.857 Life Percentage Used: 0% 00:08:37.857 Data Units Read: 1992 00:08:37.857 Data Units Written: 1779 00:08:37.857 Host Read Commands: 107359 00:08:37.857 Host Write Commands: 105628 00:08:37.857 Controller Busy Time: 0 minutes 00:08:37.857 Power Cycles: 0 00:08:37.857 Power On Hours: 0 hours 00:08:37.857 Unsafe Shutdowns: 0 00:08:37.857 Unrecoverable Media Errors: 0 00:08:37.857 Lifetime Error Log Entries: 0 00:08:37.857 Warning Temperature Time: 0 minutes 00:08:37.857 Critical Temperature Time: 0 minutes 00:08:37.857 00:08:37.857 Number of Queues 00:08:37.857 ================ 00:08:37.857 Number of I/O Submission Queues: 64 00:08:37.857 Number of I/O Completion Queues: 64 00:08:37.857 00:08:37.857 ZNS Specific Controller Data 00:08:37.857 ============================ 00:08:37.857 Zone Append Size Limit: 0 00:08:37.857 00:08:37.857 00:08:37.857 Active Namespaces 00:08:37.857 ================= 00:08:37.857 Namespace ID:1 00:08:37.857 Error Recovery Timeout: Unlimited 00:08:37.857 Command Set Identifier: NVM (00h) 00:08:37.857 Deallocate: Supported 00:08:37.857 Deallocated/Unwritten Error: Supported 00:08:37.857 Deallocated Read Value: All 0x00 00:08:37.857 Deallocate in Write Zeroes: Not Supported 00:08:37.857 Deallocated Guard Field: 0xFFFF 00:08:37.857 Flush: Supported 00:08:37.857 Reservation: Not Supported 00:08:37.857 Namespace Sharing Capabilities: Private 00:08:37.857 Size (in LBAs): 1048576 (4GiB) 00:08:37.857 Capacity (in LBAs): 1048576 (4GiB) 00:08:37.857 Utilization (in LBAs): 1048576 (4GiB) 00:08:37.857 Thin Provisioning: Not Supported 00:08:37.857 Per-NS Atomic Units: No 00:08:37.857 Maximum Single Source Range Length: 128 00:08:37.857 Maximum Copy Length: 128 00:08:37.857 Maximum Source Range Count: 128 00:08:37.857 NGUID/EUI64 Never Reused: No 00:08:37.857 Namespace Write Protected: No 00:08:37.857 Number of LBA Formats: 8 00:08:37.857 Current LBA Format: LBA Format #04 00:08:37.857 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:37.857 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:37.857 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:37.857 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:37.857 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:37.857 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:37.857 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:37.857 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:37.857 00:08:37.857 NVM Specific Namespace Data 00:08:37.857 =========================== 00:08:37.857 Logical Block Storage Tag Mask: 0 00:08:37.857 Protection Information Capabilities: 00:08:37.857 16b Guard Protection Information Storage Tag Support: No 00:08:37.857 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:37.857 Storage Tag Check Read Support: No 00:08:37.857 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.857 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.857 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.857 Extended 
LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.857 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.857 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.857 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.857 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.857 Namespace ID:2 00:08:37.857 Error Recovery Timeout: Unlimited 00:08:37.857 Command Set Identifier: NVM (00h) 00:08:37.857 Deallocate: Supported 00:08:37.857 Deallocated/Unwritten Error: Supported 00:08:37.857 Deallocated Read Value: All 0x00 00:08:37.857 Deallocate in Write Zeroes: Not Supported 00:08:37.857 Deallocated Guard Field: 0xFFFF 00:08:37.857 Flush: Supported 00:08:37.857 Reservation: Not Supported 00:08:37.857 Namespace Sharing Capabilities: Private 00:08:37.857 Size (in LBAs): 1048576 (4GiB) 00:08:37.857 Capacity (in LBAs): 1048576 (4GiB) 00:08:37.857 Utilization (in LBAs): 1048576 (4GiB) 00:08:37.857 Thin Provisioning: Not Supported 00:08:37.857 Per-NS Atomic Units: No 00:08:37.857 Maximum Single Source Range Length: 128 00:08:37.857 Maximum Copy Length: 128 00:08:37.857 Maximum Source Range Count: 128 00:08:37.857 NGUID/EUI64 Never Reused: No 00:08:37.857 Namespace Write Protected: No 00:08:37.857 Number of LBA Formats: 8 00:08:37.857 Current LBA Format: LBA Format #04 00:08:37.857 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:37.857 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:37.857 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:37.857 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:37.857 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:37.857 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:37.857 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:37.857 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:37.857 00:08:37.857 NVM Specific Namespace Data 00:08:37.857 =========================== 00:08:37.857 Logical Block Storage Tag Mask: 0 00:08:37.857 Protection Information Capabilities: 00:08:37.857 16b Guard Protection Information Storage Tag Support: No 00:08:37.857 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:37.857 Storage Tag Check Read Support: No 00:08:37.857 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.857 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.857 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.857 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.857 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.857 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.857 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.857 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:37.857 Namespace ID:3 00:08:37.857 Error Recovery Timeout: Unlimited 00:08:37.857 Command Set Identifier: NVM (00h) 00:08:37.857 Deallocate: Supported 00:08:37.857 Deallocated/Unwritten Error: Supported 00:08:37.857 Deallocated Read Value: All 0x00 00:08:37.857 Deallocate in Write Zeroes: Not Supported 
00:08:37.857 Deallocated Guard Field: 0xFFFF 00:08:37.857 Flush: Supported 00:08:37.857 Reservation: Not Supported 00:08:37.857 Namespace Sharing Capabilities: Private 00:08:37.857 Size (in LBAs): 1048576 (4GiB) 00:08:37.857 Capacity (in LBAs): 1048576 (4GiB) 00:08:37.857 Utilization (in LBAs): 1048576 (4GiB) 00:08:37.857 Thin Provisioning: Not Supported 00:08:37.857 Per-NS Atomic Units: No 00:08:37.857 Maximum Single Source Range Length: 128 00:08:37.857 Maximum Copy Length: 128 00:08:37.857 Maximum Source Range Count: 128 00:08:37.857 NGUID/EUI64 Never Reused: No 00:08:37.857 Namespace Write Protected: No 00:08:37.857 Number of LBA Formats: 8 00:08:37.857 Current LBA Format: LBA Format #04 00:08:37.857 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:37.857 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:37.857 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:37.857 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:37.857 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:37.857 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:37.857 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:37.857 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:37.857 00:08:37.857 NVM Specific Namespace Data 00:08:37.857 =========================== 00:08:37.857 Logical Block Storage Tag Mask: 0 00:08:37.857 Protection Information Capabilities: 00:08:37.857 16b Guard Protection Information Storage Tag Support: No 00:08:37.857 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:38.120 Storage Tag Check Read Support: No 00:08:38.120 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:38.120 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:38.120 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:38.120 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:38.120 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:38.120 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:38.120 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:38.120 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:38.120 17:42:57 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:38.120 17:42:57 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:38.120 ===================================================== 00:08:38.120 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:38.120 ===================================================== 00:08:38.120 Controller Capabilities/Features 00:08:38.120 ================================ 00:08:38.120 Vendor ID: 1b36 00:08:38.120 Subsystem Vendor ID: 1af4 00:08:38.120 Serial Number: 12343 00:08:38.120 Model Number: QEMU NVMe Ctrl 00:08:38.120 Firmware Version: 8.0.0 00:08:38.120 Recommended Arb Burst: 6 00:08:38.120 IEEE OUI Identifier: 00 54 52 00:08:38.120 Multi-path I/O 00:08:38.120 May have multiple subsystem ports: No 00:08:38.120 May have multiple controllers: Yes 00:08:38.120 Associated with SR-IOV VF: No 00:08:38.120 Max Data Transfer Size: 524288 00:08:38.120 Max Number of Namespaces: 256 00:08:38.120 Max Number of I/O Queues: 64 00:08:38.120 NVMe 
Specification Version (VS): 1.4 00:08:38.120 NVMe Specification Version (Identify): 1.4 00:08:38.120 Maximum Queue Entries: 2048 00:08:38.120 Contiguous Queues Required: Yes 00:08:38.120 Arbitration Mechanisms Supported 00:08:38.120 Weighted Round Robin: Not Supported 00:08:38.120 Vendor Specific: Not Supported 00:08:38.120 Reset Timeout: 7500 ms 00:08:38.120 Doorbell Stride: 4 bytes 00:08:38.120 NVM Subsystem Reset: Not Supported 00:08:38.120 Command Sets Supported 00:08:38.120 NVM Command Set: Supported 00:08:38.120 Boot Partition: Not Supported 00:08:38.120 Memory Page Size Minimum: 4096 bytes 00:08:38.120 Memory Page Size Maximum: 65536 bytes 00:08:38.120 Persistent Memory Region: Not Supported 00:08:38.120 Optional Asynchronous Events Supported 00:08:38.120 Namespace Attribute Notices: Supported 00:08:38.120 Firmware Activation Notices: Not Supported 00:08:38.120 ANA Change Notices: Not Supported 00:08:38.120 PLE Aggregate Log Change Notices: Not Supported 00:08:38.120 LBA Status Info Alert Notices: Not Supported 00:08:38.120 EGE Aggregate Log Change Notices: Not Supported 00:08:38.120 Normal NVM Subsystem Shutdown event: Not Supported 00:08:38.120 Zone Descriptor Change Notices: Not Supported 00:08:38.120 Discovery Log Change Notices: Not Supported 00:08:38.120 Controller Attributes 00:08:38.120 128-bit Host Identifier: Not Supported 00:08:38.120 Non-Operational Permissive Mode: Not Supported 00:08:38.120 NVM Sets: Not Supported 00:08:38.120 Read Recovery Levels: Not Supported 00:08:38.120 Endurance Groups: Supported 00:08:38.120 Predictable Latency Mode: Not Supported 00:08:38.120 Traffic Based Keep Alive: Not Supported 00:08:38.120 Namespace Granularity: Not Supported 00:08:38.120 SQ Associations: Not Supported 00:08:38.120 UUID List: Not Supported 00:08:38.120 Multi-Domain Subsystem: Not Supported 00:08:38.120 Fixed Capacity Management: Not Supported 00:08:38.120 Variable Capacity Management: Not Supported 00:08:38.120 Delete Endurance Group: Not Supported 00:08:38.120 Delete NVM Set: Not Supported 00:08:38.120 Extended LBA Formats Supported: Supported 00:08:38.120 Flexible Data Placement Supported: Supported 00:08:38.120 00:08:38.120 Controller Memory Buffer Support 00:08:38.120 ================================ 00:08:38.120 Supported: No 00:08:38.120 00:08:38.120 Persistent Memory Region Support 00:08:38.120 ================================ 00:08:38.120 Supported: No 00:08:38.120 00:08:38.120 Admin Command Set Attributes 00:08:38.120 ============================ 00:08:38.120 Security Send/Receive: Not Supported 00:08:38.120 Format NVM: Supported 00:08:38.120 Firmware Activate/Download: Not Supported 00:08:38.120 Namespace Management: Supported 00:08:38.120 Device Self-Test: Not Supported 00:08:38.120 Directives: Supported 00:08:38.120 NVMe-MI: Not Supported 00:08:38.120 Virtualization Management: Not Supported 00:08:38.120 Doorbell Buffer Config: Supported 00:08:38.120 Get LBA Status Capability: Not Supported 00:08:38.120 Command & Feature Lockdown Capability: Not Supported 00:08:38.120 Abort Command Limit: 4 00:08:38.120 Async Event Request Limit: 4 00:08:38.120 Number of Firmware Slots: N/A 00:08:38.120 Firmware Slot 1 Read-Only: N/A 00:08:38.120 Firmware Activation Without Reset: N/A 00:08:38.120 Multiple Update Detection Support: N/A 00:08:38.120 Firmware Update Granularity: No Information Provided 00:08:38.120 Per-Namespace SMART Log: Yes 00:08:38.120 Asymmetric Namespace Access Log Page: Not Supported 00:08:38.120 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:38.120
Command Effects Log Page: Supported 00:08:38.120 Get Log Page Extended Data: Supported 00:08:38.120 Telemetry Log Pages: Not Supported 00:08:38.120 Persistent Event Log Pages: Not Supported 00:08:38.120 Supported Log Pages Log Page: May Support 00:08:38.120 Commands Supported & Effects Log Page: Not Supported 00:08:38.120 Feature Identifiers & Effects Log Page: May Support 00:08:38.120 NVMe-MI Commands & Effects Log Page: May Support 00:08:38.120 Data Area 4 for Telemetry Log: Not Supported 00:08:38.120 Error Log Page Entries Supported: 1 00:08:38.120 Keep Alive: Not Supported 00:08:38.120 00:08:38.120 NVM Command Set Attributes 00:08:38.120 ========================== 00:08:38.120 Submission Queue Entry Size 00:08:38.120 Max: 64 00:08:38.120 Min: 64 00:08:38.120 Completion Queue Entry Size 00:08:38.120 Max: 16 00:08:38.120 Min: 16 00:08:38.120 Number of Namespaces: 256 00:08:38.120 Compare Command: Supported 00:08:38.120 Write Uncorrectable Command: Not Supported 00:08:38.120 Dataset Management Command: Supported 00:08:38.120 Write Zeroes Command: Supported 00:08:38.120 Set Features Save Field: Supported 00:08:38.120 Reservations: Not Supported 00:08:38.120 Timestamp: Supported 00:08:38.120 Copy: Supported 00:08:38.120 Volatile Write Cache: Present 00:08:38.120 Atomic Write Unit (Normal): 1 00:08:38.120 Atomic Write Unit (PFail): 1 00:08:38.120 Atomic Compare & Write Unit: 1 00:08:38.120 Fused Compare & Write: Not Supported 00:08:38.120 Scatter-Gather List 00:08:38.120 SGL Command Set: Supported 00:08:38.120 SGL Keyed: Not Supported 00:08:38.120 SGL Bit Bucket Descriptor: Not Supported 00:08:38.120 SGL Metadata Pointer: Not Supported 00:08:38.120 Oversized SGL: Not Supported 00:08:38.120 SGL Metadata Address: Not Supported 00:08:38.121 SGL Offset: Not Supported 00:08:38.121 Transport SGL Data Block: Not Supported 00:08:38.121 Replay Protected Memory Block: Not Supported 00:08:38.121 00:08:38.121 Firmware Slot Information 00:08:38.121 ========================= 00:08:38.121 Active slot: 1 00:08:38.121 Slot 1 Firmware Revision: 1.0 00:08:38.121 00:08:38.121 00:08:38.121 Commands Supported and Effects 00:08:38.121 ============================== 00:08:38.121 Admin Commands 00:08:38.121 -------------- 00:08:38.121 Delete I/O Submission Queue (00h): Supported 00:08:38.121 Create I/O Submission Queue (01h): Supported 00:08:38.121 Get Log Page (02h): Supported 00:08:38.121 Delete I/O Completion Queue (04h): Supported 00:08:38.121 Create I/O Completion Queue (05h): Supported 00:08:38.121 Identify (06h): Supported 00:08:38.121 Abort (08h): Supported 00:08:38.121 Set Features (09h): Supported 00:08:38.121 Get Features (0Ah): Supported 00:08:38.121 Asynchronous Event Request (0Ch): Supported 00:08:38.121 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:38.121 Directive Send (19h): Supported 00:08:38.121 Directive Receive (1Ah): Supported 00:08:38.121 Virtualization Management (1Ch): Supported 00:08:38.121 Doorbell Buffer Config (7Ch): Supported 00:08:38.121 Format NVM (80h): Supported LBA-Change 00:08:38.121 I/O Commands 00:08:38.121 ------------ 00:08:38.121 Flush (00h): Supported LBA-Change 00:08:38.121 Write (01h): Supported LBA-Change 00:08:38.121 Read (02h): Supported 00:08:38.121 Compare (05h): Supported 00:08:38.121 Write Zeroes (08h): Supported LBA-Change 00:08:38.121 Dataset Management (09h): Supported LBA-Change 00:08:38.121 Unknown (0Ch): Supported 00:08:38.121 Unknown (12h): Supported 00:08:38.121 Copy (19h): Supported LBA-Change 00:08:38.121 Unknown (1Dh): Supported
LBA-Change 00:08:38.121 00:08:38.121 Error Log 00:08:38.121 ========= 00:08:38.121 00:08:38.121 Arbitration 00:08:38.121 =========== 00:08:38.121 Arbitration Burst: no limit 00:08:38.121 00:08:38.121 Power Management 00:08:38.121 ================ 00:08:38.121 Number of Power States: 1 00:08:38.121 Current Power State: Power State #0 00:08:38.121 Power State #0: 00:08:38.121 Max Power: 25.00 W 00:08:38.121 Non-Operational State: Operational 00:08:38.121 Entry Latency: 16 microseconds 00:08:38.121 Exit Latency: 4 microseconds 00:08:38.121 Relative Read Throughput: 0 00:08:38.121 Relative Read Latency: 0 00:08:38.121 Relative Write Throughput: 0 00:08:38.121 Relative Write Latency: 0 00:08:38.121 Idle Power: Not Reported 00:08:38.121 Active Power: Not Reported 00:08:38.121 Non-Operational Permissive Mode: Not Supported 00:08:38.121 00:08:38.121 Health Information 00:08:38.121 ================== 00:08:38.121 Critical Warnings: 00:08:38.121 Available Spare Space: OK 00:08:38.121 Temperature: OK 00:08:38.121 Device Reliability: OK 00:08:38.121 Read Only: No 00:08:38.121 Volatile Memory Backup: OK 00:08:38.121 Current Temperature: 323 Kelvin (50 Celsius) 00:08:38.121 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:38.121 Available Spare: 0% 00:08:38.121 Available Spare Threshold: 0% 00:08:38.121 Life Percentage Used: 0% 00:08:38.121 Data Units Read: 721 00:08:38.121 Data Units Written: 650 00:08:38.121 Host Read Commands: 36308 00:08:38.121 Host Write Commands: 35731 00:08:38.121 Controller Busy Time: 0 minutes 00:08:38.121 Power Cycles: 0 00:08:38.121 Power On Hours: 0 hours 00:08:38.121 Unsafe Shutdowns: 0 00:08:38.121 Unrecoverable Media Errors: 0 00:08:38.121 Lifetime Error Log Entries: 0 00:08:38.121 Warning Temperature Time: 0 minutes 00:08:38.121 Critical Temperature Time: 0 minutes 00:08:38.121 00:08:38.121 Number of Queues 00:08:38.121 ================ 00:08:38.121 Number of I/O Submission Queues: 64 00:08:38.121 Number of I/O Completion Queues: 64 00:08:38.121 00:08:38.121 ZNS Specific Controller Data 00:08:38.121 ============================ 00:08:38.121 Zone Append Size Limit: 0 00:08:38.121 00:08:38.121 00:08:38.121 Active Namespaces 00:08:38.121 ================= 00:08:38.121 Namespace ID:1 00:08:38.121 Error Recovery Timeout: Unlimited 00:08:38.121 Command Set Identifier: NVM (00h) 00:08:38.121 Deallocate: Supported 00:08:38.121 Deallocated/Unwritten Error: Supported 00:08:38.121 Deallocated Read Value: All 0x00 00:08:38.121 Deallocate in Write Zeroes: Not Supported 00:08:38.121 Deallocated Guard Field: 0xFFFF 00:08:38.121 Flush: Supported 00:08:38.121 Reservation: Not Supported 00:08:38.121 Namespace Sharing Capabilities: Multiple Controllers 00:08:38.121 Size (in LBAs): 262144 (1GiB) 00:08:38.121 Capacity (in LBAs): 262144 (1GiB) 00:08:38.121 Utilization (in LBAs): 262144 (1GiB) 00:08:38.121 Thin Provisioning: Not Supported 00:08:38.121 Per-NS Atomic Units: No 00:08:38.121 Maximum Single Source Range Length: 128 00:08:38.121 Maximum Copy Length: 128 00:08:38.121 Maximum Source Range Count: 128 00:08:38.121 NGUID/EUI64 Never Reused: No 00:08:38.121 Namespace Write Protected: No 00:08:38.121 Endurance group ID: 1 00:08:38.121 Number of LBA Formats: 8 00:08:38.121 Current LBA Format: LBA Format #04 00:08:38.121 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:38.121 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:38.121 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:38.121 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:38.121 LBA Format #04: Data 
Size: 4096 Metadata Size: 0 00:08:38.121 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:38.121 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:38.121 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:38.121 00:08:38.121 Get Feature FDP: 00:08:38.121 ================ 00:08:38.121 Enabled: Yes 00:08:38.121 FDP configuration index: 0 00:08:38.121 00:08:38.121 FDP configurations log page 00:08:38.121 =========================== 00:08:38.121 Number of FDP configurations: 1 00:08:38.121 Version: 0 00:08:38.121 Size: 112 00:08:38.121 FDP Configuration Descriptor: 0 00:08:38.121 Descriptor Size: 96 00:08:38.121 Reclaim Group Identifier format: 2 00:08:38.121 FDP Volatile Write Cache: Not Present 00:08:38.121 FDP Configuration: Valid 00:08:38.121 Vendor Specific Size: 0 00:08:38.121 Number of Reclaim Groups: 2 00:08:38.121 Number of Reclaim Unit Handles: 8 00:08:38.121 Max Placement Identifiers: 128 00:08:38.121 Number of Namespaces Supported: 256 00:08:38.121 Reclaim unit Nominal Size: 6000000 bytes 00:08:38.121 Estimated Reclaim Unit Time Limit: Not Reported 00:08:38.121 RUH Desc #000: RUH Type: Initially Isolated 00:08:38.121 RUH Desc #001: RUH Type: Initially Isolated 00:08:38.121 RUH Desc #002: RUH Type: Initially Isolated 00:08:38.121 RUH Desc #003: RUH Type: Initially Isolated 00:08:38.121 RUH Desc #004: RUH Type: Initially Isolated 00:08:38.121 RUH Desc #005: RUH Type: Initially Isolated 00:08:38.121 RUH Desc #006: RUH Type: Initially Isolated 00:08:38.121 RUH Desc #007: RUH Type: Initially Isolated 00:08:38.121 00:08:38.121 FDP reclaim unit handle usage log page 00:08:38.121 ====================================== 00:08:38.121 Number of Reclaim Unit Handles: 8 00:08:38.121 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:38.121 RUH Usage Desc #001: RUH Attributes: Unused 00:08:38.121 RUH Usage Desc #002: RUH Attributes: Unused 00:08:38.121 RUH Usage Desc #003: RUH Attributes: Unused 00:08:38.121 RUH Usage Desc #004: RUH Attributes: Unused 00:08:38.121 RUH Usage Desc #005: RUH Attributes: Unused 00:08:38.121 RUH Usage Desc #006: RUH Attributes: Unused 00:08:38.121 RUH Usage Desc #007: RUH Attributes: Unused 00:08:38.121 00:08:38.121 FDP statistics log page 00:08:38.121 ======================= 00:08:38.121 Host bytes with metadata written: 421044224 00:08:38.121 Media bytes with metadata written: 421089280 00:08:38.121 Media bytes erased: 0 00:08:38.121 00:08:38.121 FDP events log page 00:08:38.121 =================== 00:08:38.121 Number of FDP events: 0 00:08:38.121 00:08:38.121 NVM Specific Namespace Data 00:08:38.121 =========================== 00:08:38.121 Logical Block Storage Tag Mask: 0 00:08:38.121 Protection Information Capabilities: 00:08:38.121 16b Guard Protection Information Storage Tag Support: No 00:08:38.121 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:38.121 Storage Tag Check Read Support: No 00:08:38.121 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:38.121 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:38.121 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:38.121 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:38.121 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:38.121 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard
PI 00:08:38.121 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:38.121 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:38.121 00:08:38.121 real 0m1.108s 00:08:38.121 user 0m0.386s 00:08:38.122 sys 0m0.506s 00:08:38.122 17:42:58 nvme.nvme_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:38.122 ************************************ 00:08:38.122 END TEST nvme_identify 00:08:38.122 ************************************ 00:08:38.122 17:42:58 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:38.382 17:42:58 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:38.382 17:42:58 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:38.382 17:42:58 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:38.382 17:42:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:38.382 ************************************ 00:08:38.382 START TEST nvme_perf 00:08:38.382 ************************************ 00:08:38.382 17:42:58 nvme.nvme_perf -- common/autotest_common.sh@1127 -- # nvme_perf 00:08:38.382 17:42:58 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:39.759 Initializing NVMe Controllers 00:08:39.759 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:39.759 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:39.759 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:39.759 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:39.759 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:39.759 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:39.759 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:39.759 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:39.759 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:39.759 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:39.759 Initialization complete. Launching workers. 
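For reference, the perf stage above runs SPDK's example perf tool with the flags shown in the log. A minimal stand-alone sketch of the same run follows; it assumes the repo layout of this job, and the sudo and explicit -r transport ID (restricting the run to a single controller) are additions that do not appear in the test's own invocation:

    # -q 128    queue depth (outstanding I/Os per queue)
    # -w read   sequential-read workload
    # -o 12288  I/O size in bytes (12 KiB)
    # -t 1      run time in seconds
    # -LL       latency tracking; the doubled flag requests the detailed
    #           per-range histograms printed below (a single -L prints only
    #           the latency summary)
    # -i 0      shared-memory group ID
    # -N        skip the shutdown-notification step on detach (per the tool's
    #           usage text)
    sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -r 'trtype:PCIe traddr:0000:00:10.0' \
        -q 128 -w read -o 12288 -t 1 -LL -i 0 -N

Without -r, perf probes and attaches every NVMe controller it can claim, which is what produces the six per-namespace result rows in the output that follows.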
00:08:39.759 ======================================================== 00:08:39.759 Latency(us) 00:08:39.759 Device Information : IOPS MiB/s Average min max 00:08:39.759 PCIE (0000:00:10.0) NSID 1 from core 0: 18022.34 211.20 7104.56 4618.13 30783.29 00:08:39.759 PCIE (0000:00:11.0) NSID 1 from core 0: 18022.34 211.20 7099.87 4480.26 30634.82 00:08:39.759 PCIE (0000:00:13.0) NSID 1 from core 0: 18022.34 211.20 7093.88 3922.12 31605.65 00:08:39.759 PCIE (0000:00:12.0) NSID 1 from core 0: 18022.34 211.20 7087.96 3731.64 31303.15 00:08:39.759 PCIE (0000:00:12.0) NSID 2 from core 0: 18022.34 211.20 7081.68 3498.73 31291.56 00:08:39.759 PCIE (0000:00:12.0) NSID 3 from core 0: 18086.25 211.95 7050.83 3276.87 23990.68 00:08:39.759 ======================================================== 00:08:39.759 Total : 108197.93 1267.94 7086.44 3276.87 31605.65 00:08:39.759 00:08:39.759 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:39.759 ================================================================================= 00:08:39.759 1.00000% : 6024.271us 00:08:39.759 10.00000% : 6200.714us 00:08:39.759 25.00000% : 6377.157us 00:08:39.759 50.00000% : 6704.837us 00:08:39.759 75.00000% : 7007.311us 00:08:39.759 90.00000% : 8116.382us 00:08:39.759 95.00000% : 10183.286us 00:08:39.759 98.00000% : 12048.542us 00:08:39.759 99.00000% : 14619.569us 00:08:39.759 99.50000% : 21878.942us 00:08:39.759 99.90000% : 30650.683us 00:08:39.759 99.99000% : 30852.332us 00:08:39.759 99.99900% : 30852.332us 00:08:39.759 99.99990% : 30852.332us 00:08:39.759 99.99999% : 30852.332us 00:08:39.759 00:08:39.759 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:39.759 ================================================================================= 00:08:39.759 1.00000% : 6125.095us 00:08:39.759 10.00000% : 6251.126us 00:08:39.759 25.00000% : 6427.569us 00:08:39.759 50.00000% : 6654.425us 00:08:39.759 75.00000% : 6906.486us 00:08:39.759 90.00000% : 8166.794us 00:08:39.759 95.00000% : 10233.698us 00:08:39.759 98.00000% : 11846.892us 00:08:39.759 99.00000% : 14619.569us 00:08:39.759 99.50000% : 21677.292us 00:08:39.759 99.90000% : 30449.034us 00:08:39.759 99.99000% : 30650.683us 00:08:39.759 99.99900% : 30650.683us 00:08:39.759 99.99990% : 30650.683us 00:08:39.759 99.99999% : 30650.683us 00:08:39.759 00:08:39.759 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:39.759 ================================================================================= 00:08:39.759 1.00000% : 6099.889us 00:08:39.759 10.00000% : 6251.126us 00:08:39.759 25.00000% : 6427.569us 00:08:39.759 50.00000% : 6704.837us 00:08:39.759 75.00000% : 6906.486us 00:08:39.759 90.00000% : 8065.969us 00:08:39.759 95.00000% : 10132.874us 00:08:39.759 98.00000% : 11897.305us 00:08:39.759 99.00000% : 14417.920us 00:08:39.759 99.50000% : 22080.591us 00:08:39.759 99.90000% : 31457.280us 00:08:39.759 99.99000% : 31658.929us 00:08:39.759 99.99900% : 31658.929us 00:08:39.759 99.99990% : 31658.929us 00:08:39.759 99.99999% : 31658.929us 00:08:39.759 00:08:39.759 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:39.759 ================================================================================= 00:08:39.759 1.00000% : 6074.683us 00:08:39.759 10.00000% : 6251.126us 00:08:39.759 25.00000% : 6427.569us 00:08:39.759 50.00000% : 6704.837us 00:08:39.759 75.00000% : 6906.486us 00:08:39.759 90.00000% : 8217.206us 00:08:39.759 95.00000% : 9981.637us 00:08:39.759 98.00000% : 11897.305us 00:08:39.759 99.00000% : 
14317.095us 00:08:39.759 99.50000% : 22786.363us 00:08:39.759 99.90000% : 31053.982us 00:08:39.759 99.99000% : 31457.280us 00:08:39.759 99.99900% : 31457.280us 00:08:39.759 99.99990% : 31457.280us 00:08:39.759 99.99999% : 31457.280us 00:08:39.759 00:08:39.759 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:39.759 ================================================================================= 00:08:39.759 1.00000% : 6099.889us 00:08:39.759 10.00000% : 6251.126us 00:08:39.759 25.00000% : 6427.569us 00:08:39.759 50.00000% : 6654.425us 00:08:39.760 75.00000% : 6906.486us 00:08:39.760 90.00000% : 8267.618us 00:08:39.760 95.00000% : 10032.049us 00:08:39.760 98.00000% : 11594.831us 00:08:39.760 99.00000% : 14317.095us 00:08:39.760 99.50000% : 23088.837us 00:08:39.760 99.90000% : 31053.982us 00:08:39.760 99.99000% : 31457.280us 00:08:39.760 99.99900% : 31457.280us 00:08:39.760 99.99990% : 31457.280us 00:08:39.760 99.99999% : 31457.280us 00:08:39.760 00:08:39.760 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:39.760 ================================================================================= 00:08:39.760 1.00000% : 6099.889us 00:08:39.760 10.00000% : 6251.126us 00:08:39.760 25.00000% : 6427.569us 00:08:39.760 50.00000% : 6704.837us 00:08:39.760 75.00000% : 6906.486us 00:08:39.760 90.00000% : 8267.618us 00:08:39.760 95.00000% : 10183.286us 00:08:39.760 98.00000% : 11998.129us 00:08:39.760 99.00000% : 14619.569us 00:08:39.760 99.50000% : 15728.640us 00:08:39.760 99.90000% : 23794.609us 00:08:39.760 99.99000% : 23996.258us 00:08:39.760 99.99900% : 23996.258us 00:08:39.760 99.99990% : 23996.258us 00:08:39.760 99.99999% : 23996.258us 00:08:39.760 00:08:39.760 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:39.760 ============================================================================== 00:08:39.760 Range in us Cumulative IO count 00:08:39.760 4612.726 - 4637.932: 0.0111% ( 2) 00:08:39.760 4637.932 - 4663.138: 0.0277% ( 3) 00:08:39.760 4663.138 - 4688.345: 0.0554% ( 5) 00:08:39.760 4688.345 - 4713.551: 0.0609% ( 1) 00:08:39.760 4713.551 - 4738.757: 0.0720% ( 2) 00:08:39.760 4738.757 - 4763.963: 0.0776% ( 1) 00:08:39.760 4763.963 - 4789.169: 0.0831% ( 1) 00:08:39.760 4789.169 - 4814.375: 0.1053% ( 4) 00:08:39.760 4814.375 - 4839.582: 0.1108% ( 1) 00:08:39.760 4839.582 - 4864.788: 0.1219% ( 2) 00:08:39.760 4864.788 - 4889.994: 0.1330% ( 2) 00:08:39.760 4889.994 - 4915.200: 0.1441% ( 2) 00:08:39.760 4915.200 - 4940.406: 0.1551% ( 2) 00:08:39.760 4940.406 - 4965.612: 0.1662% ( 2) 00:08:39.760 4965.612 - 4990.818: 0.1773% ( 2) 00:08:39.760 4990.818 - 5016.025: 0.1884% ( 2) 00:08:39.760 5016.025 - 5041.231: 0.1939% ( 1) 00:08:39.760 5041.231 - 5066.437: 0.2161% ( 4) 00:08:39.760 5066.437 - 5091.643: 0.2216% ( 1) 00:08:39.760 5091.643 - 5116.849: 0.2327% ( 2) 00:08:39.760 5116.849 - 5142.055: 0.2493% ( 3) 00:08:39.760 5142.055 - 5167.262: 0.2549% ( 1) 00:08:39.760 5167.262 - 5192.468: 0.2660% ( 2) 00:08:39.760 5192.468 - 5217.674: 0.2826% ( 3) 00:08:39.760 5217.674 - 5242.880: 0.2937% ( 2) 00:08:39.760 5242.880 - 5268.086: 0.3047% ( 2) 00:08:39.760 5268.086 - 5293.292: 0.3103% ( 1) 00:08:39.760 5293.292 - 5318.498: 0.3269% ( 3) 00:08:39.760 5318.498 - 5343.705: 0.3380% ( 2) 00:08:39.760 5343.705 - 5368.911: 0.3435% ( 1) 00:08:39.760 5368.911 - 5394.117: 0.3546% ( 2) 00:08:39.760 5898.240 - 5923.446: 0.3823% ( 5) 00:08:39.760 5923.446 - 5948.652: 0.4433% ( 11) 00:08:39.760 5948.652 - 5973.858: 0.5264% ( 15) 00:08:39.760 5973.858 - 
5999.065: 0.6427% ( 21) 00:08:39.760 5999.065 - 6024.271: 1.0084% ( 66) 00:08:39.760 6024.271 - 6049.477: 1.5071% ( 90) 00:08:39.760 6049.477 - 6074.683: 2.6485% ( 206) 00:08:39.760 6074.683 - 6099.889: 4.0448% ( 252) 00:08:39.760 6099.889 - 6125.095: 5.7957% ( 316) 00:08:39.760 6125.095 - 6150.302: 7.7294% ( 349) 00:08:39.760 6150.302 - 6175.508: 9.4969% ( 319) 00:08:39.760 6175.508 - 6200.714: 11.4694% ( 356) 00:08:39.760 6200.714 - 6225.920: 13.4641% ( 360) 00:08:39.760 6225.920 - 6251.126: 15.1928% ( 312) 00:08:39.760 6251.126 - 6276.332: 17.2595% ( 373) 00:08:39.760 6276.332 - 6301.538: 19.2819% ( 365) 00:08:39.760 6301.538 - 6326.745: 21.2655% ( 358) 00:08:39.760 6326.745 - 6351.951: 23.2713% ( 362) 00:08:39.760 6351.951 - 6377.157: 25.4045% ( 385) 00:08:39.760 6377.157 - 6402.363: 27.3770% ( 356) 00:08:39.760 6402.363 - 6427.569: 29.5213% ( 387) 00:08:39.760 6427.569 - 6452.775: 31.5935% ( 374) 00:08:39.760 6452.775 - 6503.188: 35.8987% ( 777) 00:08:39.760 6503.188 - 6553.600: 40.1319% ( 764) 00:08:39.760 6553.600 - 6604.012: 44.5423% ( 796) 00:08:39.760 6604.012 - 6654.425: 48.7977% ( 768) 00:08:39.760 6654.425 - 6704.837: 53.1195% ( 780) 00:08:39.760 6704.837 - 6755.249: 57.5022% ( 791) 00:08:39.760 6755.249 - 6805.662: 61.8296% ( 781) 00:08:39.760 6805.662 - 6856.074: 66.1348% ( 777) 00:08:39.760 6856.074 - 6906.486: 70.4122% ( 772) 00:08:39.760 6906.486 - 6956.898: 74.6232% ( 760) 00:08:39.760 6956.898 - 7007.311: 78.6735% ( 731) 00:08:39.760 7007.311 - 7057.723: 81.7764% ( 560) 00:08:39.760 7057.723 - 7108.135: 83.9096% ( 385) 00:08:39.760 7108.135 - 7158.548: 85.2449% ( 241) 00:08:39.760 7158.548 - 7208.960: 86.0372% ( 143) 00:08:39.760 7208.960 - 7259.372: 86.7132% ( 122) 00:08:39.760 7259.372 - 7309.785: 87.1288% ( 75) 00:08:39.760 7309.785 - 7360.197: 87.4169% ( 52) 00:08:39.760 7360.197 - 7410.609: 87.6939% ( 50) 00:08:39.760 7410.609 - 7461.022: 87.9211% ( 41) 00:08:39.760 7461.022 - 7511.434: 88.1372% ( 39) 00:08:39.760 7511.434 - 7561.846: 88.3090% ( 31) 00:08:39.760 7561.846 - 7612.258: 88.5029% ( 35) 00:08:39.760 7612.258 - 7662.671: 88.6968% ( 35) 00:08:39.760 7662.671 - 7713.083: 88.8298% ( 24) 00:08:39.760 7713.083 - 7763.495: 88.9849% ( 28) 00:08:39.760 7763.495 - 7813.908: 89.1401% ( 28) 00:08:39.760 7813.908 - 7864.320: 89.2675% ( 23) 00:08:39.760 7864.320 - 7914.732: 89.4393% ( 31) 00:08:39.760 7914.732 - 7965.145: 89.5944% ( 28) 00:08:39.760 7965.145 - 8015.557: 89.7440% ( 27) 00:08:39.760 8015.557 - 8065.969: 89.8548% ( 20) 00:08:39.760 8065.969 - 8116.382: 90.0044% ( 27) 00:08:39.760 8116.382 - 8166.794: 90.1263% ( 22) 00:08:39.760 8166.794 - 8217.206: 90.2648% ( 25) 00:08:39.760 8217.206 - 8267.618: 90.4089% ( 26) 00:08:39.760 8267.618 - 8318.031: 90.5142% ( 19) 00:08:39.760 8318.031 - 8368.443: 90.6305% ( 21) 00:08:39.760 8368.443 - 8418.855: 90.7192% ( 16) 00:08:39.760 8418.855 - 8469.268: 90.8023% ( 15) 00:08:39.760 8469.268 - 8519.680: 90.8799% ( 14) 00:08:39.760 8519.680 - 8570.092: 90.9852% ( 19) 00:08:39.760 8570.092 - 8620.505: 91.0683% ( 15) 00:08:39.760 8620.505 - 8670.917: 91.1735% ( 19) 00:08:39.760 8670.917 - 8721.329: 91.2788% ( 19) 00:08:39.760 8721.329 - 8771.742: 91.3564% ( 14) 00:08:39.760 8771.742 - 8822.154: 91.4506% ( 17) 00:08:39.760 8822.154 - 8872.566: 91.5669% ( 21) 00:08:39.760 8872.566 - 8922.978: 91.6833% ( 21) 00:08:39.760 8922.978 - 8973.391: 91.7276% ( 8) 00:08:39.760 8973.391 - 9023.803: 91.8384% ( 20) 00:08:39.760 9023.803 - 9074.215: 91.9382% ( 18) 00:08:39.760 9074.215 - 9124.628: 92.0656% ( 23) 00:08:39.760 9124.628 - 
00:08:39.760 [tail of the preceding latency histogram: buckets from 9175.040 us upward, cumulative percentage rising from 92.1598% to 100.0000% in the 30650.683 - 30852.332 us bucket]
00:08:39.761 
00:08:39.761 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:39.761 ==============================================================================
00:08:39.761        Range in us     Cumulative    IO count
00:08:39.761 [buckets from 4461.489 us to 30650.683 us; cumulative percentage rises from 0.0111% to 100.0000%, with most IOs completing between roughly 6.0 and 7.0 ms]
00:08:39.762 
00:08:39.762 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:39.762 ==============================================================================
00:08:39.762        Range in us     Cumulative    IO count
00:08:39.763 [buckets from 3906.954 us to 31658.929 us; cumulative percentage rises from 0.0166% to 100.0000%, again concentrated between roughly 6.0 and 7.0 ms]
00:08:39.763 
00:08:39.763 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:39.763 ==============================================================================
00:08:39.763        Range in us     Cumulative    IO count
00:08:39.764 [buckets from 3730.511 us to 31457.280 us; cumulative percentage rises from 0.0332% to 100.0000%]
00:08:39.765 
00:08:39.765 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:39.765 ==============================================================================
00:08:39.765        Range in us     Cumulative    IO count
00:08:39.766 [buckets from 3478.449 us to 31457.280 us; cumulative percentage rises from 0.0055% to 100.0000%]
00:08:39.766 
00:08:39.766 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:39.766 ==============================================================================
00:08:39.766        Range in us     Cumulative    IO count
00:08:39.767 [buckets from 3276.800 us to 23996.258 us; cumulative percentage rises from 0.0221% to 100.0000%]
00:08:39.767 
00:08:39.767 17:42:59 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:08:40.703 Initializing NVMe Controllers
00:08:40.703 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:40.703 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:40.703 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:40.703 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:40.703 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:40.703 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:40.703 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:40.703 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:40.703 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:40.703 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:40.703 Initialization complete. Launching workers.
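Each bucket line in the -LL histograms above pairs a latency range in microseconds with a cumulative percentage and a per-bucket IO count (e.g. "6074.683 - 6099.889: 0.9586% ( 60)"). A minimal Python sketch for pulling those tuples out of a captured log follows; this is illustrative only, assuming the line shape shown above, and is not part of SPDK or this test suite:

    import re

    # One bucket line looks like "<lo> - <hi>: <cumulative>% ( <count> )".
    # The percentage is cumulative; the parenthesized count is per-bucket.
    BUCKET_RE = re.compile(
        r"(?P<lo>\d+\.\d+)\s*-\s*(?P<hi>\d+\.\d+):\s*"
        r"(?P<cum>\d+\.\d+)%\s*\(\s*(?P<count>\d+)\s*\)"
    )

    def parse_buckets(lines):
        """Yield (lo_us, hi_us, cumulative_pct, io_count) per bucket line."""
        for line in lines:
            m = BUCKET_RE.search(line)
            if m:
                yield (float(m["lo"]), float(m["hi"]),
                       float(m["cum"]), int(m["count"]))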
00:08:40.703 ========================================================
00:08:40.703                                                  Latency(us)
00:08:40.703 Device Information                     :       IOPS      MiB/s    Average        min        max
00:08:40.703 PCIE (0000:00:10.0) NSID 1 from core 0:   16739.19     196.16    7648.18    4639.48   24654.37
00:08:40.703 PCIE (0000:00:11.0) NSID 1 from core 0:   16739.19     196.16    7641.89    4591.11   23977.07
00:08:40.703 PCIE (0000:00:13.0) NSID 1 from core 0:   16739.19     196.16    7635.50    4220.61   23829.28
00:08:40.703 PCIE (0000:00:12.0) NSID 1 from core 0:   16739.19     196.16    7629.11    3847.22   23371.31
00:08:40.703 PCIE (0000:00:12.0) NSID 2 from core 0:   16739.19     196.16    7622.72    3681.83   22429.22
00:08:40.703 PCIE (0000:00:12.0) NSID 3 from core 0:   16739.19     196.16    7616.53    3286.17   21446.34
00:08:40.703 ========================================================
00:08:40.703 Total                                  :  100435.15    1176.97    7632.32    3286.17   24654.37
00:08:40.703 
00:08:40.703 Summary latency data from core 0, all six namespaces (percentile : latency in us):
00:08:40.703 =================================================================================
00:08:40.703  Percentile   10.0 NSID 1   11.0 NSID 1   13.0 NSID 1   12.0 NSID 1   12.0 NSID 2   12.0 NSID 3
00:08:40.703   1.00000%      5847.828      5948.652      5898.240      5847.828      5797.415      5797.415
00:08:40.703  10.00000%      6377.157      6452.775      6402.363      6402.363      6402.363      6427.569
00:08:40.703  25.00000%      6654.425      6704.837      6654.425      6654.425      6654.425      6704.837
00:08:40.703  50.00000%      7158.548      7108.135      7108.135      7108.135      7108.135      7108.135
00:08:40.703  75.00000%      8065.969      8065.969      8065.969      8116.382      8065.969      8065.969
00:08:40.703  90.00000%      9326.277      9275.865      9275.865      9275.865      9427.102      9326.277
00:08:40.703  95.00000%     10788.234     10838.646     10889.058     10788.234     10838.646     10788.234
00:08:40.703  98.00000%     12502.252     12703.902     12250.191     12451.840     12804.726     12703.902
00:08:40.703  99.00000%     14216.271     15022.868     14317.095     13611.323     13308.849     13913.797
00:08:40.703  99.50000%     18350.080     18148.431     18148.431     17644.308     17845.957     17442.658
00:08:40.703  99.90000%     24097.083     23693.785     23592.960     23088.837     22080.591     21273.994
00:08:40.703  99.99000%     24601.206     23996.258     23895.434     23391.311     22483.889     21475.643
00:08:40.703  99.99900%     24702.031     23996.258     23895.434     23391.311     22483.889     21475.643
00:08:40.703  99.99990%     24702.031     23996.258     23895.434     23391.311     22483.889     21475.643
00:08:40.703  99.99999%     24702.031     23996.258     23895.434     23391.311     22483.889     21475.643
00:08:40.704 5898.240 - 5923.446: 1.6162% ( 26)
00:08:40.704 5923.446 - 5948.652: 1.8070% ( 32)
00:08:40.704 5948.652 - 5973.858: 1.9979% ( 32)
00:08:40.704 5973.858 - 5999.065: 2.2066% ( 35)
00:08:40.704 5999.065 - 6024.271: 2.4690% ( 44)
00:08:40.704 6024.271 - 6049.477: 2.7612% ( 49)
00:08:40.704 6049.477 - 6074.683: 3.0654% ( 51)
00:08:40.704 6074.683 - 6099.889: 3.5007% ( 73)
00:08:40.704 6099.889 - 6125.095: 3.9361% ( 73)
00:08:40.704 6125.095 - 6150.302: 4.4788% ( 91)
00:08:40.704 6150.302 - 6175.508: 4.9797% ( 84)
00:08:40.704 6175.508 - 6200.714: 5.4747% ( 83)
00:08:40.704 6200.714 - 6225.920: 6.0949% ( 104)
00:08:40.704 6225.920 - 6251.126: 6.7032% ( 102)
00:08:40.704 6251.126 - 6276.332: 7.3891% ( 115)
00:08:40.704 6276.332 - 6301.538: 8.2180% ( 139)
00:08:40.704 6301.538 - 6326.745: 8.8681% ( 109)
00:08:40.704 6326.745 - 6351.951: 9.5360% ( 112)
00:08:40.704 6351.951 - 6377.157: 10.2338% ( 117)
00:08:40.704 6377.157 - 6402.363: 11.0448% ( 136)
00:08:40.704 6402.363 - 6427.569: 12.1243% ( 181)
00:08:40.704 6427.569 - 6452.775: 13.5019% ( 231)
00:08:40.704 6452.775 - 6503.188: 16.1438% ( 443)
00:08:40.704 6503.188 - 6553.600: 19.0422% ( 486)
00:08:40.704 6553.600 - 6604.012: 22.1553% ( 522)
00:08:40.704 6604.012 - 6654.425: 25.3220% ( 531)
00:08:40.704 6654.425 - 6704.837: 28.2741% ( 495)
00:08:40.704 6704.837 - 6755.249: 31.7271% ( 579)
00:08:40.704 6755.249 - 6805.662: 35.2040% ( 583)
00:08:40.704 6805.662 - 6856.074: 38.1679% ( 497)
00:08:40.704 6856.074 - 6906.486: 40.7979% ( 441)
00:08:40.704 6906.486 - 6956.898: 43.2073% ( 404)
00:08:40.704 6956.898 - 7007.311: 45.2708% ( 346)
00:08:40.704 7007.311 - 7057.723: 47.3581% ( 350)
00:08:40.704 7057.723 - 7108.135: 49.7972% ( 409)
00:08:40.704 7108.135 - 7158.548: 51.7474% ( 327)
00:08:40.704 7158.548 - 7208.960: 53.3516% ( 269)
00:08:40.704 7208.960 - 7259.372: 54.9141% ( 262)
00:08:40.704 7259.372 - 7309.785: 56.1784% ( 212)
00:08:40.704 7309.785 - 7360.197: 57.6455% ( 246)
00:08:40.704 7360.197 - 7410.609: 59.0052% ( 228)
00:08:40.704 7410.609 - 7461.022: 60.1264% ( 188)
00:08:40.704 7461.022 - 7511.434: 61.1701% ( 175)
00:08:40.704 7511.434 - 7561.846: 62.2316% ( 178)
00:08:40.704 7561.846 - 7612.258: 63.3290% ( 184)
00:08:40.704 7612.258 - 7662.671: 64.4979% ( 196)
00:08:40.704 7662.671 - 7713.083: 65.9292% ( 240)
00:08:40.704 7713.083 - 7763.495: 67.2889% ( 228)
00:08:40.704 7763.495 - 7813.908: 68.5234% ( 207)
00:08:40.704 7813.908 - 7864.320: 70.0203% ( 251)
00:08:40.704 7864.320 - 7914.732: 71.3383% ( 221)
00:08:40.704 7914.732 - 7965.145: 72.5847% ( 209)
00:08:40.704 7965.145 - 8015.557: 73.8490% ( 212)
00:08:40.704 8015.557 - 8065.969: 75.0239% ( 197)
00:08:40.704 8065.969 - 8116.382: 76.1391% ( 187)
00:08:40.704 8116.382 - 8166.794: 77.2245% ( 182)
00:08:40.704 8166.794 - 8217.206: 78.1846% ( 161)
00:08:40.704 8217.206 - 8267.618: 78.9778% ( 133)
00:08:40.704 8267.618 - 8318.031: 79.9201% ( 158)
00:08:40.704 8318.031 - 8368.443: 80.9458% ( 172)
00:08:40.704 8368.443 - 8418.855: 81.7152% ( 129)
00:08:40.704 8418.855 - 8469.268: 82.6276% ( 153)
00:08:40.704 8469.268 - 8519.680: 83.4983% ( 146)
00:08:40.704 8519.680 - 8570.092: 84.2557% ( 127)
00:08:40.704 8570.092 - 8620.505: 84.8282% ( 96)
00:08:40.704 8620.505 - 8670.917: 85.4127% ( 98)
00:08:40.704 8670.917 - 8721.329: 86.0329% ( 104)
00:08:40.704 8721.329 - 8771.742: 86.5875% ( 93)
00:08:40.704 8771.742 - 8822.154: 87.1720% ( 98)
00:08:40.704 8822.154 - 8872.566: 87.5656% ( 66)
00:08:40.704 8872.566 - 8922.978: 87.9592% ( 66)
00:08:40.704 8922.978 - 8973.391: 88.2872% ( 55)
00:08:40.704 8973.391 - 9023.803: 88.5437% ( 43)
00:08:40.704 9023.803 - 9074.215: 88.8001% ( 43)
00:08:40.704 9074.215 - 9124.628: 89.0685% ( 45)
00:08:40.704 9124.628 - 9175.040: 89.3786% ( 52)
00:08:40.704 9175.040 - 9225.452: 89.6469% ( 45)
00:08:40.704 9225.452 - 9275.865: 89.9511% ( 51)
00:08:40.704 9275.865 - 9326.277: 90.2016% ( 42)
00:08:40.704 9326.277 - 9376.689: 90.3984% ( 33)
00:08:40.704 9376.689 - 9427.102: 90.5594% ( 27)
00:08:40.704 9427.102 - 9477.514: 90.7383% ( 30)
00:08:40.704 9477.514 - 9527.926: 90.9649% ( 38)
00:08:40.704 9527.926 - 9578.338: 91.1319% ( 28)
00:08:40.704 9578.338 - 9628.751: 91.2870% ( 26)
00:08:40.704 9628.751 - 9679.163: 91.4778% ( 32)
00:08:40.704 9679.163 - 9729.575: 91.6627% ( 31)
00:08:40.704 9729.575 - 9779.988: 91.8118% ( 25)
00:08:40.704 9779.988 - 9830.400: 91.9847% ( 29)
00:08:40.704 9830.400 - 9880.812: 92.1159% ( 22)
00:08:40.704 9880.812 - 9931.225: 92.2292% ( 19)
00:08:40.704 9931.225 - 9981.637: 92.4380% ( 35)
00:08:40.704 9981.637 - 10032.049: 92.5990% ( 27)
00:08:40.704 10032.049 - 10082.462: 92.7123% ( 19)
00:08:40.704 10082.462 - 10132.874: 92.8077% ( 16)
00:08:40.704 10132.874 - 10183.286: 92.9270% ( 20)
00:08:40.704 10183.286 - 10233.698: 93.1119% ( 31)
00:08:40.704 10233.698 - 10284.111: 93.2371% ( 21)
00:08:40.704 10284.111 - 10334.523: 93.3504% ( 19)
00:08:40.704 10334.523 - 10384.935: 93.4757% ( 21)
00:08:40.704 10384.935 - 10435.348: 93.7858% ( 52)
00:08:40.704 10435.348 - 10485.760: 93.9528% ( 28)
00:08:40.704 10485.760 - 10536.172: 94.0959% ( 24)
00:08:40.704 10536.172 - 10586.585: 94.3046% ( 35)
00:08:40.704 10586.585 - 10636.997: 94.4537% ( 25)
00:08:40.704 10636.997 - 10687.409: 94.6923% ( 40)
00:08:40.704 10687.409 - 10737.822: 94.8712% ( 30)
00:08:40.704 10737.822 - 10788.234: 95.0680% ( 33)
00:08:40.704 10788.234 - 10838.646: 95.2230% ( 26)
00:08:40.704 10838.646 - 10889.058: 95.3423% ( 20)
00:08:40.704 10889.058 - 10939.471: 95.4556% ( 19)
00:08:40.704 10939.471 - 10989.883: 95.5570% ( 17)
00:08:40.704 10989.883 - 11040.295: 95.6703% ( 19)
00:08:40.704 11040.295 - 11090.708: 95.7717% ( 17)
00:08:40.704 11090.708 - 11141.120: 95.8910% ( 20)
00:08:40.704 11141.120 - 11191.532: 95.9804% ( 15)
00:08:40.704 11191.532 - 11241.945: 96.0997% ( 20)
00:08:40.704 11241.945 - 11292.357: 96.1832% ( 14)
00:08:40.704 11292.357 - 11342.769: 96.3084% ( 21)
00:08:40.704 11342.769 - 11393.182: 96.3919% ( 14)
00:08:40.704 11393.182 - 11443.594: 96.4695% ( 13)
00:08:40.704 11443.594 - 11494.006: 96.5768% ( 18)
00:08:40.704 11494.006 - 11544.418: 96.6543% ( 13)
00:08:40.704 11544.418 - 11594.831: 96.7498% ( 16)
00:08:40.704 11594.831 - 11645.243: 96.8392% ( 15)
00:08:40.704 11645.243 - 11695.655: 96.9466% ( 18)
00:08:40.704 11695.655 - 11746.068: 97.0181% ( 12)
00:08:40.704 11746.068 - 11796.480: 97.0897% ( 12)
00:08:40.704 11796.480 - 11846.892: 97.1493% ( 10)
00:08:40.704 11846.892 - 11897.305: 97.2328% ( 14)
00:08:40.704 11897.305 - 11947.717: 97.3044% ( 12)
00:08:40.704 11947.717 - 11998.129: 97.3879% ( 14)
00:08:40.704 11998.129 - 12048.542: 97.4773% ( 15)
00:08:40.704 12048.542 - 12098.954: 97.5310% ( 9)
00:08:40.704 12098.954 - 12149.366: 97.5847% ( 9)
00:08:40.704 12149.366 - 12199.778: 97.6264% ( 7)
00:08:40.704 12199.778 - 12250.191: 97.6920% ( 11)
00:08:40.704 12250.191 - 12300.603: 97.7576% ( 11)
00:08:40.704 12300.603 - 12351.015: 97.8471% ( 15)
00:08:40.704 12351.015 - 12401.428: 97.9187% ( 12)
00:08:40.704 12401.428 - 12451.840: 97.9723% ( 9)
00:08:40.704 12451.840 - 12502.252: 98.0200% ( 8)
00:08:40.704 12502.252 - 12552.665: 98.0618% ( 7)
00:08:40.704 12552.665 - 12603.077: 98.1214% ( 10)
00:08:40.704 12603.077 - 12653.489: 98.1811% ( 10)
00:08:40.704 12653.489 - 12703.902: 98.2288% ( 8)
00:08:40.704 12703.902 - 12754.314: 98.2646% ( 6)
00:08:40.704 12754.314 - 12804.726: 98.2824% ( 3)
00:08:40.704 12804.726 - 12855.138: 98.3003% ( 3)
00:08:40.704 12855.138 - 12905.551: 98.3182% ( 3)
00:08:40.704 12905.551 - 13006.375: 98.3540% ( 6)
00:08:40.704 13006.375 - 13107.200: 98.3958% ( 7)
00:08:40.704 13107.200 - 13208.025: 98.4256% ( 5)
00:08:40.704 13208.025 - 13308.849: 98.4614% ( 6)
00:08:40.704 13308.849 - 13409.674: 98.4733% ( 2)
00:08:40.704 13409.674 - 13510.498: 98.5091% ( 6)
00:08:40.704 13510.498 - 13611.323: 98.6462% ( 23)
00:08:40.704 13611.323 - 13712.148: 98.7059% ( 10)
00:08:40.704 13712.148 - 13812.972: 98.7476% ( 7)
00:08:40.704 13812.972 - 13913.797: 98.8251% ( 13)
00:08:40.704 13913.797 - 14014.622: 98.8967% ( 12)
00:08:40.704 14014.622 - 14115.446: 98.9683% ( 12)
00:08:40.704 14115.446 - 14216.271: 99.0100% ( 7)
00:08:40.704 14216.271 - 14317.095: 99.0518% ( 7)
00:08:40.704 14317.095 - 14417.920: 99.0816% ( 5)
00:08:40.704 14417.920 - 14518.745: 99.1353% ( 9)
00:08:40.704 14518.745 - 14619.569: 99.1651% ( 5)
00:08:40.704 14619.569 - 14720.394: 99.1710% ( 1)
00:08:40.704 14720.394 - 14821.218: 99.1949% ( 4)
00:08:40.704 14821.218 - 14922.043: 99.2009% ( 1)
00:08:40.705 14922.043 - 15022.868: 99.2128% ( 2)
00:08:40.705 15022.868 - 15123.692: 99.2366% ( 4)
00:08:40.705 17442.658 - 17543.483: 99.2426% ( 1)
00:08:40.705 17745.132 - 17845.957: 99.2903% ( 8)
00:08:40.705 17845.957 - 17946.782: 99.4036% ( 19)
00:08:40.705 17946.782 - 18047.606: 99.4394% ( 6)
00:08:40.705 18047.606 - 18148.431: 99.4573% ( 3)
00:08:40.705 18148.431 - 18249.255: 99.4812% ( 4)
00:08:40.705 18249.255 - 18350.080: 99.5110% ( 5)
00:08:40.705 18350.080 - 18450.905: 99.5408% ( 5)
00:08:40.705 18450.905 - 18551.729: 99.5706% ( 5)
00:08:40.705 18652.554 - 18753.378: 99.5945% ( 4)
00:08:40.705 18753.378 - 18854.203: 99.6183% ( 4)
00:08:40.705 22988.012 - 23088.837: 99.6302% ( 2)
00:08:40.705 23088.837 - 23189.662: 99.6601% ( 5)
00:08:40.705 23189.662 - 23290.486: 99.6839% ( 4)
00:08:40.705 23290.486 - 23391.311: 99.7137% ( 5)
00:08:40.705 23391.311 - 23492.135: 99.7376% ( 4)
00:08:40.705 23492.135 - 23592.960: 99.7674% ( 5)
00:08:40.705 23592.960 - 23693.785: 99.7972% ( 5)
00:08:40.705 23693.785 - 23794.609: 99.8211% ( 4)
00:08:40.705 23794.609 - 23895.434: 99.8449% ( 4)
00:08:40.705 23895.434 - 23996.258: 99.8807% ( 6)
00:08:40.705 23996.258 - 24097.083: 99.9046% ( 4)
00:08:40.705 24097.083 - 24197.908: 99.9344% ( 5)
00:08:40.705 24197.908 - 24298.732: 99.9404% ( 1)
00:08:40.705 24399.557 - 24500.382: 99.9642% ( 4)
00:08:40.705 24500.382 - 24601.206: 99.9940% ( 5)
00:08:40.705 24601.206 - 24702.031: 100.0000% ( 1)
00:08:40.705
00:08:40.705 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:40.705 ==============================================================================
00:08:40.705 Range in us Cumulative IO count
00:08:40.705 4587.520 - 4612.726: 0.0060% ( 1)
00:08:40.705 4637.932 - 4663.138: 0.0775% ( 12)
00:08:40.705 4663.138 - 4688.345: 0.1133% ( 6)
00:08:40.705 4688.345 - 4713.551: 0.1610% ( 8)
00:08:40.705 4713.551 - 4738.757: 0.2445% ( 14)
00:08:40.705 4738.757 - 4763.963: 0.2684% ( 4)
00:08:40.705 4763.963 - 4789.169: 0.2803% ( 2)
00:08:40.705 4789.169 - 4814.375: 0.2922% ( 2)
00:08:40.705 4814.375 - 4839.582: 0.3042% ( 2)
00:08:40.705 4839.582 - 4864.788: 0.3161% ( 2)
00:08:40.705 4864.788 - 4889.994: 0.3280% ( 2)
00:08:40.705 4889.994 - 4915.200: 0.3340% ( 1)
00:08:40.705 4915.200 - 4940.406: 0.3459% ( 2)
00:08:40.705 4940.406 - 4965.612: 0.3578% ( 2)
00:08:40.705 4965.612 - 4990.818: 0.3698% ( 2)
00:08:40.705 4990.818 - 5016.025: 0.3817% ( 2)
00:08:40.705 5620.972 - 5646.178: 0.3936% ( 2)
00:08:40.705 5646.178 - 5671.385: 0.3996% ( 1)
00:08:40.705 5671.385 - 5696.591: 0.4055% ( 1)
00:08:40.705 5696.591 - 5721.797: 0.4115% ( 1)
00:08:40.705 5747.003 - 5772.209: 0.4175% ( 1)
00:08:40.705 5772.209 - 5797.415: 0.4234% ( 1)
00:08:40.705 5797.415 - 5822.622: 0.4592% ( 6)
00:08:40.705 5822.622 - 5847.828: 0.4890% ( 5)
00:08:40.705 5847.828 - 5873.034: 0.5606% ( 12)
00:08:40.705 5873.034 - 5898.240: 0.8170% ( 43)
00:08:40.705 5898.240 - 5923.446: 0.9721% ( 26)
00:08:40.705 5923.446 - 5948.652: 1.2166% ( 41)
00:08:40.705 5948.652 - 5973.858: 1.3717% ( 26)
00:08:40.705 5973.858 - 5999.065: 1.6877% ( 53)
00:08:40.705 5999.065 - 6024.271: 2.0456% ( 60)
00:08:40.705 6024.271 - 6049.477: 2.5286% ( 81)
00:08:40.705 6049.477 - 6074.683: 2.6896% ( 27)
00:08:40.705 6074.683 - 6099.889: 2.8924% ( 34)
00:08:40.705 6099.889 - 6125.095: 3.2741% ( 64)
00:08:40.705 6125.095 - 6150.302: 3.7631% ( 82)
00:08:40.705 6150.302 - 6175.508: 4.0196% ( 43)
00:08:40.705 6175.508 - 6200.714: 4.4788% ( 77)
00:08:40.705 6200.714 - 6225.920: 4.7412% ( 44)
00:08:40.705 6225.920 - 6251.126: 5.2898% ( 92)
00:08:40.705 6251.126 - 6276.332: 5.7371% ( 75)
00:08:40.705 6276.332 - 6301.538: 6.2023% ( 78)
00:08:40.705 6301.538 - 6326.745: 6.9716% ( 129)
00:08:40.705 6326.745 - 6351.951: 7.5740% ( 101)
00:08:40.705 6351.951 - 6377.157: 8.2121% ( 107)
00:08:40.705 6377.157 - 6402.363: 8.9933% ( 131)
00:08:40.705 6402.363 - 6427.569: 9.9117% ( 154)
00:08:40.705 6427.569 - 6452.775: 11.2059% ( 217)
00:08:40.705 6452.775 - 6503.188: 13.8478% ( 443)
00:08:40.705 6503.188 - 6553.600: 16.8535% ( 504)
00:08:40.705 6553.600 - 6604.012: 20.4377% ( 601)
00:08:40.705 6604.012 - 6654.425: 24.5348% ( 687)
00:08:40.705 6654.425 - 6704.837: 28.3993% ( 648)
00:08:40.705 6704.837 - 6755.249: 32.4905% ( 686)
00:08:40.705 6755.249 - 6805.662: 36.5339% ( 678)
00:08:40.705 6805.662 - 6856.074: 40.0286% ( 586)
00:08:40.705 6856.074 - 6906.486: 43.1238% ( 519)
00:08:40.705 6906.486 - 6956.898: 46.0043% ( 483)
00:08:40.705 6956.898 - 7007.311: 47.9008% ( 318)
00:08:40.705 7007.311 - 7057.723: 49.8807% ( 332)
00:08:40.705 7057.723 - 7108.135: 51.3657% ( 249)
00:08:40.705 7108.135 - 7158.548: 53.0534% ( 283)
00:08:40.705 7158.548 - 7208.960: 54.6636% ( 270)
00:08:40.705 7208.960 - 7259.372: 55.8385% ( 197)
00:08:40.705 7259.372 - 7309.785: 56.7927% ( 160)
00:08:40.705 7309.785 - 7360.197: 57.7171% ( 155)
00:08:40.705 7360.197 - 7410.609: 58.6772% ( 161)
00:08:40.705 7410.609 - 7461.022: 59.5002% ( 138)
00:08:40.705 7461.022 - 7511.434: 60.5976% ( 184)
00:08:40.705 7511.434 - 7561.846: 61.5339% ( 157)
00:08:40.705 7561.846 - 7612.258: 62.2793% ( 125)
00:08:40.705 7612.258 - 7662.671: 63.3528% ( 180)
00:08:40.705 7662.671 - 7713.083: 64.3667% ( 170)
00:08:40.705 7713.083 - 7763.495: 65.6906% ( 222)
00:08:40.705 7763.495 - 7813.908: 67.2292% ( 258)
00:08:40.705 7813.908 - 7864.320: 68.7023% ( 247)
00:08:40.705 7864.320 - 7914.732: 70.3125% ( 270)
00:08:40.705 7914.732 - 7965.145: 72.1613% ( 310)
00:08:40.705 7965.145 - 8015.557: 74.2009% ( 342)
00:08:40.705 8015.557 - 8065.969: 75.9661% ( 296)
00:08:40.705 8065.969 - 8116.382: 77.3139% ( 226)
00:08:40.705 8116.382 - 8166.794: 78.8228% ( 253)
00:08:40.705 8166.794 - 8217.206: 79.9022% ( 181)
00:08:40.705 8217.206 - 8267.618: 80.8266% ( 155)
00:08:40.705 8267.618 - 8318.031: 81.6913% ( 145)
00:08:40.705 8318.031 - 8368.443: 82.4368% ( 125)
00:08:40.705 8368.443 - 8418.855: 83.1286% ( 116)
00:08:40.705 8418.855 - 8469.268: 83.7607% ( 106)
00:08:40.705 8469.268 - 8519.680: 84.3094% ( 92)
00:08:40.705 8519.680 - 8570.092: 84.9058% ( 100)
00:08:40.705 8570.092 - 8620.505: 85.4067% ( 84)
00:08:40.705 8620.505 - 8670.917: 85.8361% ( 72)
00:08:40.705 8670.917 - 8721.329: 86.2595% ( 71)
00:08:40.705 8721.329 - 8771.742: 86.7188% ( 77)
00:08:40.705 8771.742 - 8822.154: 87.2197% ( 84)
00:08:40.705 8822.154 - 8872.566: 87.6312% ( 69)
00:08:40.705 8872.566 - 8922.978: 88.0427% ( 69)
00:08:40.705 8922.978 - 8973.391: 88.4005% ( 60)
00:08:40.705 8973.391 - 9023.803: 88.7047% ( 51)
00:08:40.705 9023.803 - 9074.215: 89.0208% ( 53)
00:08:40.705 9074.215 - 9124.628: 89.2653% ( 41)
00:08:40.705 9124.628 - 9175.040: 89.4859% ( 37)
00:08:40.705 9175.040 - 9225.452: 89.7304% ( 41)
00:08:40.705 9225.452 - 9275.865: 90.0346% ( 51)
00:08:40.705 9275.865 - 9326.277: 90.3447% ( 52)
00:08:40.705 9326.277 - 9376.689: 90.6011% ( 43)
00:08:40.705 9376.689 - 9427.102: 90.8576% ( 43)
00:08:40.705 9427.102 - 9477.514: 91.0842% ( 38)
00:08:40.705 9477.514 - 9527.926: 91.2810% ( 33)
00:08:40.705 9527.926 - 9578.338: 91.4599% ( 30)
00:08:40.705 9578.338 - 9628.751: 91.6627% ( 34)
00:08:40.705 9628.751 - 9679.163: 91.8595% ( 33)
00:08:40.705 9679.163 - 9729.575: 91.9847% ( 21)
00:08:40.705 9729.575 - 9779.988: 92.1398% ( 26)
00:08:40.705 9779.988 - 9830.400: 92.2770% ( 23)
00:08:40.705 9830.400 - 9880.812: 92.4320% ( 26)
00:08:40.705 9880.812 - 9931.225: 92.5632% ( 22)
00:08:40.705 9931.225 - 9981.637: 92.6885% ( 21)
00:08:40.705 9981.637 - 10032.049: 92.7958% ( 18)
00:08:40.705 10032.049 - 10082.462: 92.8435% ( 8)
00:08:40.705 10082.462 - 10132.874: 92.8853% ( 7)
00:08:40.705 10132.874 - 10183.286: 92.9568% ( 12)
00:08:40.705 10183.286 - 10233.698: 93.0403% ( 14)
00:08:40.705 10233.698 - 10284.111: 93.1178% ( 13)
00:08:40.705 10284.111 - 10334.523: 93.2252% ( 18)
00:08:40.705 10334.523 - 10384.935: 93.4220% ( 33)
00:08:40.705 10384.935 - 10435.348: 93.6188% ( 33)
00:08:40.705 10435.348 - 10485.760: 93.7977% ( 30)
00:08:40.705 10485.760 - 10536.172: 93.9349% ( 23)
00:08:40.705 10536.172 - 10586.585: 94.0840% ( 25)
00:08:40.705 10586.585 - 10636.997: 94.2510% ( 28)
00:08:40.705 10636.997 - 10687.409: 94.4239% ( 29)
00:08:40.705 10687.409 - 10737.822: 94.5849% ( 27)
00:08:40.705 10737.822 - 10788.234: 94.8652% ( 47)
00:08:40.705 10788.234 - 10838.646: 95.0024% ( 23)
00:08:40.705 10838.646 - 10889.058: 95.1873% ( 31)
00:08:40.705 10889.058 - 10939.471: 95.3602% ( 29)
00:08:40.705 10939.471 - 10989.883: 95.5689% ( 35)
00:08:40.705 10989.883 - 11040.295: 95.9089% ( 57)
00:08:40.706 11040.295 - 11090.708: 96.1176% ( 35)
00:08:40.706 11090.708 - 11141.120: 96.2607% ( 24)
00:08:40.706 11141.120 - 11191.532: 96.3800% ( 20)
00:08:40.706 11191.532 - 11241.945: 96.4516% ( 12)
00:08:40.706 11241.945 - 11292.357: 96.5291% ( 13)
00:08:40.706 11292.357 - 11342.769: 96.5708% ( 7)
00:08:40.706 11342.769 - 11393.182: 96.6007% ( 5)
00:08:40.706 11393.182 - 11443.594: 96.6245% ( 4)
00:08:40.706 11443.594 - 11494.006: 96.6961% ( 12)
00:08:40.706 11494.006 - 11544.418: 96.7259% ( 5)
00:08:40.706 11544.418 - 11594.831: 96.7677% ( 7)
00:08:40.706 11594.831 - 11645.243: 96.8094% ( 7)
00:08:40.706 11645.243 - 11695.655: 96.8452% ( 6)
00:08:40.706 11695.655 - 11746.068: 96.9167% ( 12)
00:08:40.706 11746.068 - 11796.480: 97.0002% ( 14)
00:08:40.706 11796.480 - 11846.892: 97.0778% ( 13)
00:08:40.706 11846.892 - 11897.305: 97.1792% ( 17)
00:08:40.706 11897.305 - 11947.717: 97.2507% ( 12)
00:08:40.706 11947.717 - 11998.129: 97.2984% ( 8)
00:08:40.706 11998.129 - 12048.542: 97.3581% ( 10)
00:08:40.706 12048.542 - 12098.954: 97.4177% ( 10)
00:08:40.706 12098.954 - 12149.366: 97.4773% ( 10)
00:08:40.706 12149.366 - 12199.778: 97.5429% ( 11)
00:08:40.706 12199.778 - 12250.191: 97.6562% ( 19)
00:08:40.706 12250.191 - 12300.603: 97.6801% ( 4)
00:08:40.706 12300.603 - 12351.015: 97.7099% ( 5)
00:08:40.706 12351.015 - 12401.428: 97.7576% ( 8)
00:08:40.706 12401.428 - 12451.840: 97.7934% ( 6)
00:08:40.706 12451.840 - 12502.252: 97.8411% ( 8)
00:08:40.706 12502.252 - 12552.665: 97.8709% ( 5)
00:08:40.706 12552.665 - 12603.077: 97.9127% ( 7)
00:08:40.706 12603.077 - 12653.489: 97.9962% ( 14)
00:08:40.706 12653.489 - 12703.902: 98.1035% ( 18)
00:08:40.706 12703.902 - 12754.314: 98.1512% ( 8)
00:08:40.706 12754.314 - 12804.726: 98.2049% ( 9)
00:08:40.706 12804.726 - 12855.138: 98.2765% ( 12)
00:08:40.706 12855.138 - 12905.551: 98.3540% ( 13)
00:08:40.706 12905.551 - 13006.375: 98.4554% ( 17)
00:08:40.706 13006.375 - 13107.200: 98.5448% ( 15)
00:08:40.706 13107.200 - 13208.025: 98.6582% ( 19)
00:08:40.706 13208.025 - 13308.849: 98.7536% ( 16)
00:08:40.706 13308.849 - 13409.674: 98.7774% ( 4)
00:08:40.706 13409.674 - 13510.498: 98.8013% ( 4)
00:08:40.706 13510.498 - 13611.323: 98.8251% ( 4)
00:08:40.706 13611.323 - 13712.148: 98.8430% ( 3)
00:08:40.706 13712.148 - 13812.972: 98.8550% ( 2)
00:08:40.706 14518.745 - 14619.569: 98.8609% ( 1)
00:08:40.706 14619.569 - 14720.394: 98.8907% ( 5)
00:08:40.706 14720.394 - 14821.218: 98.9146% ( 4)
00:08:40.706 14821.218 - 14922.043: 98.9504% ( 6)
00:08:40.706 14922.043 - 15022.868: 99.1830% ( 39)
00:08:40.706 15022.868 - 15123.692: 99.2247% ( 7)
00:08:40.706 15123.692 - 15224.517: 99.2366% ( 2)
00:08:40.706 17241.009 - 17341.834: 99.2545% ( 3)
00:08:40.706 17341.834 - 17442.658: 99.2844% ( 5)
00:08:40.706 17442.658 - 17543.483: 99.3201% ( 6)
00:08:40.706 17543.483 - 17644.308: 99.3500% ( 5)
00:08:40.706 17644.308 - 17745.132: 99.3798% ( 5)
00:08:40.706 17745.132 - 17845.957: 99.4156% ( 6)
00:08:40.706 17845.957 - 17946.782: 99.4394% ( 4)
00:08:40.706 17946.782 - 18047.606: 99.4752% ( 6)
00:08:40.706 18047.606 - 18148.431: 99.5050% ( 5)
00:08:40.706 18148.431 - 18249.255: 99.5348% ( 5)
00:08:40.706 18249.255 - 18350.080: 99.5646% ( 5)
00:08:40.706 18350.080 - 18450.905: 99.6004% ( 6)
00:08:40.706 18450.905 - 18551.729: 99.6183% ( 3)
00:08:40.706 22685.538 - 22786.363: 99.6243% ( 1)
00:08:40.706 22786.363 - 22887.188: 99.6541% ( 5)
00:08:40.706 22887.188 - 22988.012: 99.6899% ( 6)
00:08:40.706 22988.012 - 23088.837: 99.7197% ( 5)
00:08:40.706 23088.837 - 23189.662: 99.7555% ( 6)
00:08:40.706 23189.662 - 23290.486: 99.7793% ( 4)
00:08:40.706 23290.486 - 23391.311: 99.8092% ( 5)
00:08:40.706 23391.311 - 23492.135: 99.8449% ( 6)
00:08:40.706 23492.135 - 23592.960: 99.8748% ( 5)
00:08:40.706 23592.960 - 23693.785: 99.9046% ( 5)
00:08:40.706 23693.785 - 23794.609: 99.9404% ( 6)
00:08:40.706 23794.609 - 23895.434: 99.9702% ( 5)
00:08:40.706 23895.434 - 23996.258: 100.0000% ( 5)
00:08:40.706
00:08:40.706 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:40.706 ==============================================================================
00:08:40.706 Range in us Cumulative IO count
00:08:40.706 4209.428 - 4234.634: 0.0060% ( 1)
00:08:40.706 4360.665 - 4385.871: 0.0417% ( 6)
00:08:40.706 4385.871 - 4411.077: 0.1073% ( 11)
00:08:40.706 4411.077 - 4436.283: 0.1551% ( 8)
00:08:40.706 4436.283 - 4461.489: 0.2326% ( 13)
00:08:40.706 4461.489 - 4486.695: 0.2684% ( 6)
00:08:40.706 4486.695 - 4511.902: 0.2803% ( 2)
00:08:40.706 4511.902 - 4537.108: 0.2922% ( 2)
00:08:40.706 4537.108 - 4562.314: 0.3042% ( 2)
00:08:40.706 4562.314 - 4587.520: 0.3161% ( 2)
00:08:40.706 4587.520 - 4612.726: 0.3220% ( 1)
00:08:40.706 4612.726 - 4637.932: 0.3340% ( 2)
00:08:40.706 4637.932 - 4663.138: 0.3459% ( 2)
00:08:40.706 4663.138 - 4688.345: 0.3578% ( 2)
00:08:40.706 4688.345 - 4713.551: 0.3698% ( 2)
00:08:40.706 4713.551 - 4738.757: 0.3757% ( 1)
00:08:40.706 4738.757 - 4763.963: 0.3817% ( 1)
00:08:40.706 5671.385 - 5696.591: 0.3876% ( 1)
00:08:40.706 5696.591 - 5721.797: 0.4115% ( 4)
00:08:40.706 5721.797 - 5747.003: 0.4354% ( 4)
00:08:40.706 5747.003 - 5772.209: 0.4652% ( 5)
00:08:40.706 5772.209 - 5797.415: 0.4950% ( 5)
00:08:40.706 5797.415 - 5822.622: 0.5725% ( 13)
00:08:40.706 5822.622 - 5847.828: 0.7634% ( 32)
00:08:40.706 5847.828 - 5873.034: 0.9184% ( 26)
00:08:40.706 5873.034 - 5898.240: 1.1987% ( 47)
00:08:40.706 5898.240 - 5923.446: 1.4313% ( 39)
00:08:40.706 5923.446 - 5948.652: 1.7712% ( 57)
00:08:40.706 5948.652 - 5973.858: 2.0575% ( 48)
00:08:40.706 5973.858 - 5999.065: 2.2662% ( 35)
00:08:40.706 5999.065 - 6024.271: 2.5465% ( 47)
00:08:40.706 6024.271 - 6049.477: 2.7552% ( 35)
00:08:40.706 6049.477 - 6074.683: 3.0654% ( 52)
00:08:40.706 6074.683 - 6099.889: 3.5186% ( 76)
00:08:40.706 6099.889 - 6125.095: 3.8108% ( 49)
00:08:40.706 6125.095 - 6150.302: 4.0732% ( 44)
00:08:40.706 6150.302 - 6175.508: 4.3893% ( 53)
00:08:40.706 6175.508 - 6200.714: 4.8247% ( 73)
00:08:40.706 6200.714 - 6225.920: 5.2183% ( 66)
00:08:40.706 6225.920 - 6251.126: 5.7848% ( 95)
00:08:40.706 6251.126 - 6276.332: 6.5064% ( 121)
00:08:40.706 6276.332 - 6301.538: 7.0372% ( 89)
00:08:40.706 6301.538 - 6326.745: 7.9616% ( 155)
00:08:40.706 6326.745 - 6351.951: 8.8084% ( 142)
00:08:40.706 6351.951 - 6377.157: 9.7448% ( 157)
00:08:40.706 6377.157 - 6402.363: 11.0210% ( 214)
00:08:40.706 6402.363 - 6427.569: 12.0885% ( 179)
00:08:40.706 6427.569 - 6452.775: 13.1918% ( 185)
00:08:40.706 6452.775 - 6503.188: 16.0186% ( 474)
00:08:40.706 6503.188 - 6553.600: 18.7500% ( 458)
00:08:40.706 6553.600 - 6604.012: 22.4177% ( 615)
00:08:40.706 6604.012 - 6654.425: 25.5367% ( 523)
00:08:40.706 6654.425 - 6704.837: 28.7214% ( 534)
00:08:40.706 6704.837 - 6755.249: 31.8583% ( 526)
00:08:40.706 6755.249 - 6805.662: 34.8163% ( 496)
00:08:40.706 6805.662 - 6856.074: 37.7505% ( 492)
00:08:40.706 6856.074 - 6906.486: 41.2750% ( 591)
00:08:40.706 6906.486 - 6956.898: 44.5790% ( 554)
00:08:40.706 6956.898 - 7007.311: 47.3104% ( 458)
00:08:40.706 7007.311 - 7057.723: 49.3022% ( 334)
00:08:40.706 7057.723 - 7108.135: 51.3597% ( 345)
00:08:40.706 7108.135 - 7158.548: 52.9998% ( 275)
00:08:40.706 7158.548 - 7208.960: 54.4609% ( 245)
00:08:40.706 7208.960 - 7259.372: 55.9160% ( 244)
00:08:40.706 7259.372 - 7309.785: 56.9955% ( 181)
00:08:40.706 7309.785 - 7360.197: 57.8721% ( 147)
00:08:40.706 7360.197 - 7410.609: 58.5878% ( 120)
00:08:40.706 7410.609 - 7461.022: 59.3869% ( 134)
00:08:40.706 7461.022 - 7511.434: 60.0072% ( 104)
00:08:40.706 7511.434 - 7561.846: 60.9554% ( 159)
00:08:40.706 7561.846 - 7612.258: 62.1601% ( 202)
00:08:40.706 7612.258 - 7662.671: 63.2812% ( 188)
00:08:40.706 7662.671 - 7713.083: 64.1877% ( 152)
00:08:40.706 7713.083 - 7763.495: 65.3507% ( 195)
00:08:40.706 7763.495 - 7813.908: 66.7641% ( 237)
00:08:40.706 7813.908 - 7864.320: 68.3325% ( 263)
00:08:40.706 7864.320 - 7914.732: 70.0799% ( 293)
00:08:40.706 7914.732 - 7965.145: 71.8273% ( 293)
00:08:40.706 7965.145 - 8015.557: 73.4256% ( 268)
00:08:40.706 8015.557 - 8065.969: 75.0358% ( 270)
00:08:40.706 8065.969 - 8116.382: 76.6997% ( 279)
00:08:40.706 8116.382 - 8166.794: 78.1489% ( 243)
00:08:40.706 8166.794 - 8217.206: 79.2164% ( 179)
00:08:40.706 8217.206 - 8267.618: 80.3018% ( 182)
00:08:40.706 8267.618 - 8318.031: 81.1725% ( 146)
00:08:40.706 8318.031 - 8368.443: 82.2340% ( 178)
00:08:40.706 8368.443 - 8418.855: 83.0988% ( 145)
00:08:40.706 8418.855 - 8469.268: 83.7309% ( 106)
00:08:40.706 8469.268 - 8519.680: 84.2438% ( 86)
00:08:40.706 8519.680 - 8570.092: 84.8044% ( 94)
00:08:40.706 8570.092 - 8620.505: 85.3352% ( 89)
00:08:40.706 8620.505 - 8670.917: 86.3907% ( 177)
00:08:40.706 8670.917 - 8721.329: 86.8201% ( 72)
00:08:40.706 8721.329 - 8771.742: 87.2197% ( 67)
00:08:40.706 8771.742 - 8822.154: 87.5954% ( 63)
00:08:40.706 8822.154 - 8872.566: 88.1262% ( 89)
00:08:40.706 8872.566 - 8922.978: 88.5138% ( 65)
00:08:40.706 8922.978 - 8973.391: 88.8538% ( 57)
00:08:40.706 8973.391 - 9023.803: 89.0267% ( 29)
00:08:40.706 9023.803 - 9074.215: 89.2056% ( 30)
00:08:40.706 9074.215 - 9124.628: 89.3965% ( 32)
00:08:40.706 9124.628 - 9175.040: 89.6052% ( 35)
00:08:40.706 9175.040 - 9225.452: 89.8497% ( 41)
00:08:40.706 9225.452 - 9275.865: 90.0584% ( 35)
00:08:40.706 9275.865 - 9326.277: 90.2552% ( 33)
00:08:40.707 9326.277 - 9376.689: 90.4222% ( 28)
00:08:40.707 9376.689 - 9427.102: 90.7025% ( 47)
00:08:40.707 9427.102 - 9477.514: 90.8576% ( 26)
00:08:40.707 9477.514 - 9527.926: 90.9769% ( 20)
00:08:40.707 9527.926 - 9578.338: 91.1260% ( 25)
00:08:40.707 9578.338 - 9628.751: 91.3884% ( 44)
00:08:40.707 9628.751 - 9679.163: 91.5494% ( 27)
00:08:40.707 9679.163 - 9729.575: 91.7760% ( 38)
00:08:40.707 9729.575 - 9779.988: 91.9907% ( 36)
00:08:40.707 9779.988 - 9830.400: 92.1935% ( 34)
00:08:40.707 9830.400 - 9880.812: 92.4559% ( 44)
00:08:40.707 9880.812 - 9931.225: 92.6825% ( 38)
00:08:40.707 9931.225 - 9981.637: 92.8912% ( 35)
00:08:40.707 9981.637 - 10032.049: 93.0344% ( 24)
00:08:40.707 10032.049 - 10082.462: 93.1417% ( 18)
00:08:40.707 10082.462 - 10132.874: 93.2490% ( 18)
00:08:40.707 10132.874 - 10183.286: 93.3564% ( 18)
00:08:40.707 10183.286 - 10233.698: 93.5293% ( 29)
00:08:40.707 10233.698 - 10284.111: 93.6427% ( 19)
00:08:40.707 10284.111 - 10334.523: 93.7440% ( 17)
00:08:40.707 10334.523 - 10384.935: 93.8872% ( 24)
00:08:40.707 10384.935 - 10435.348: 93.9885% ( 17)
00:08:40.707 10435.348 - 10485.760: 94.1376% ( 25)
00:08:40.707 10485.760 - 10536.172: 94.2927% ( 26)
00:08:40.707 10536.172 - 10586.585: 94.4358% ( 24)
00:08:40.707 10586.585 - 10636.997: 94.5670% ( 22)
00:08:40.707 10636.997 - 10687.409: 94.6684% ( 17)
00:08:40.707 10687.409 - 10737.822: 94.7459% ( 13)
00:08:40.707 10737.822 - 10788.234: 94.8175% ( 12)
00:08:40.707 10788.234 - 10838.646: 94.9785% ( 27)
00:08:40.707 10838.646 - 10889.058: 95.0620% ( 14)
00:08:40.707 10889.058 - 10939.471: 95.1336% ( 12)
00:08:40.707 10939.471 - 10989.883: 95.2230% ( 15)
00:08:40.707 10989.883 - 11040.295: 95.2886% ( 11)
00:08:40.707 11040.295 - 11090.708: 95.3662% ( 13)
00:08:40.707 11090.708 - 11141.120: 95.4437% ( 13)
00:08:40.707 11141.120 - 11191.532: 95.5153% ( 12)
00:08:40.707 11191.532 - 11241.945: 95.5868% ( 12)
00:08:40.707 11241.945 - 11292.357: 95.7061% ( 20)
00:08:40.707 11292.357 - 11342.769: 95.9148% ( 35)
00:08:40.707 11342.769 - 11393.182: 96.0341% ( 20)
00:08:40.707 11393.182 - 11443.594: 96.1415% ( 18)
00:08:40.707 11443.594 - 11494.006: 96.2130% ( 12)
00:08:40.707 11494.006 - 11544.418: 96.3084% ( 16)
00:08:40.707 11544.418 - 11594.831: 96.4039% ( 16)
00:08:40.707 11594.831 - 11645.243: 96.4933% ( 15)
00:08:40.707 11645.243 - 11695.655: 96.6007% ( 18)
00:08:40.707 11695.655 - 11746.068: 96.7736% ( 29)
00:08:40.707 11746.068 - 11796.480: 96.8452% ( 12)
00:08:40.707 11796.480 - 11846.892: 96.9227% ( 13)
00:08:40.707 11846.892 - 11897.305: 97.0360% ( 19)
00:08:40.707 11897.305 - 11947.717: 97.1493% ( 19)
00:08:40.707 11947.717 - 11998.129: 97.2984% ( 25)
00:08:40.707 11998.129 - 12048.542: 97.5191% ( 37)
00:08:40.707 12048.542 - 12098.954: 97.6741% ( 26)
00:08:40.707 12098.954 - 12149.366: 97.7994% ( 21)
00:08:40.707 12149.366 - 12199.778: 97.9187% ( 20)
00:08:40.707 12199.778 - 12250.191: 98.0141% ( 16)
00:08:40.707 12250.191 - 12300.603: 98.0677% ( 9)
00:08:40.707 12300.603 - 12351.015: 98.1274% ( 10)
00:08:40.707 12351.015 - 12401.428: 98.2526% ( 21)
00:08:40.707 12401.428 - 12451.840: 98.3182% ( 11)
00:08:40.707 12451.840 - 12502.252: 98.3421% ( 4)
00:08:40.707 12502.252 - 12552.665: 98.3659% ( 4)
00:08:40.707 12552.665 - 12603.077: 98.3958% ( 5)
00:08:40.707 12603.077 - 12653.489: 98.4196% ( 4)
00:08:40.707 12653.489 - 12703.902: 98.4494% ( 5)
00:08:40.707 12703.902 - 12754.314: 98.4614% ( 2)
00:08:40.707 12754.314 - 12804.726: 98.4733% ( 2)
00:08:40.707 12804.726 - 12855.138: 98.4971% ( 4)
00:08:40.707 12855.138 - 12905.551: 98.5270% ( 5)
00:08:40.707 12905.551 - 13006.375: 98.5747% ( 8)
00:08:40.707 13006.375 - 13107.200: 98.6760% ( 17)
00:08:40.707 13107.200 - 13208.025: 98.7297% ( 9)
00:08:40.707 13208.025 - 13308.849: 98.7536% ( 4)
00:08:40.707 13308.849 - 13409.674: 98.7774% ( 4)
00:08:40.707 13409.674 - 13510.498: 98.7953% ( 3)
00:08:40.707 13510.498 - 13611.323: 98.8192% ( 4)
00:08:40.707 13611.323 - 13712.148: 98.8430% ( 4)
00:08:40.707 13712.148 - 13812.972: 98.8550% ( 2)
00:08:40.707 13812.972 - 13913.797: 98.8609% ( 1)
00:08:40.707 14014.622 - 14115.446: 98.9146% ( 9)
00:08:40.707 14115.446 - 14216.271: 98.9563% ( 7)
00:08:40.707 14216.271 - 14317.095: 99.0458% ( 15)
00:08:40.707 14317.095 - 14417.920: 99.1054% ( 10)
00:08:40.707 14417.920 - 14518.745: 99.1293% ( 4)
00:08:40.707 14518.745 - 14619.569: 99.1472% ( 3)
00:08:40.707 14619.569 - 14720.394: 99.1710% ( 4)
00:08:40.707 14720.394 - 14821.218: 99.1889% ( 3)
00:08:40.707 14821.218 - 14922.043: 99.2068% ( 3)
00:08:40.707 14922.043 - 15022.868: 99.2307% ( 4)
00:08:40.707 15022.868 - 15123.692: 99.2366% ( 1)
00:08:40.707 17341.834 - 17442.658: 99.2724% ( 6)
00:08:40.707 17442.658 - 17543.483: 99.3321% ( 10)
00:08:40.707 17543.483 - 17644.308: 99.3798% ( 8)
00:08:40.707 17644.308 - 17745.132: 99.4334% ( 9)
00:08:40.707 17745.132 - 17845.957: 99.4513% ( 3)
00:08:40.707 17845.957 - 17946.782: 99.4752% ( 4)
00:08:40.707 17946.782 - 18047.606: 99.4990% ( 4)
00:08:40.707 18047.606 - 18148.431: 99.5229% ( 4)
00:08:40.707 18148.431 - 18249.255: 99.5527% ( 5)
00:08:40.707 18249.255 - 18350.080: 99.5825% ( 5)
00:08:40.707 18350.080 - 18450.905: 99.6124% ( 5)
00:08:40.707 18450.905 - 18551.729: 99.6183% ( 1)
00:08:40.707 22584.714 - 22685.538: 99.6541% ( 6)
00:08:40.707 22685.538 - 22786.363: 99.6839% ( 5)
00:08:40.707 22786.363 - 22887.188: 99.7137% ( 5)
00:08:40.707 22887.188 - 22988.012: 99.7495% ( 6)
00:08:40.707 22988.012 - 23088.837: 99.7793% ( 5)
00:08:40.707 23088.837 - 23189.662: 99.8092% ( 5)
00:08:40.707 23189.662 - 23290.486: 99.8390% ( 5)
00:08:40.707 23290.486 - 23391.311: 99.8748% ( 6)
00:08:40.707 23391.311 - 23492.135: 99.8986% ( 4)
00:08:40.707 23492.135 - 23592.960: 99.9344% ( 6)
00:08:40.707 23592.960 - 23693.785: 99.9642% ( 5)
00:08:40.707 23693.785 - 23794.609: 99.9881% ( 4)
00:08:40.707 23794.609 - 23895.434: 100.0000% ( 2)
00:08:40.707
00:08:40.707 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:40.707 ==============================================================================
00:08:40.707 Range in us Cumulative IO count
00:08:40.707 3831.335 - 3856.542: 0.0060% ( 1)
00:08:40.707 3932.160 - 3957.366: 0.0119% ( 1)
00:08:40.707 3957.366 - 3982.572: 0.0239% ( 2)
00:08:40.707 3982.572 - 4007.778: 0.0358% ( 2)
00:08:40.707 4007.778 - 4032.985: 0.0477% ( 2)
00:08:40.707 4032.985 - 4058.191: 0.0596% ( 2)
00:08:40.707 4058.191 - 4083.397: 0.0716% ( 2)
00:08:40.707 4083.397 - 4108.603: 0.0835% ( 2)
00:08:40.707 4108.603 - 4133.809: 0.1312% ( 8)
00:08:40.707 4133.809 - 4159.015: 0.2087% ( 13)
00:08:40.707 4159.015 - 4184.222: 0.2564% ( 8)
00:08:40.707 4184.222 - 4209.428: 0.3101% ( 9)
00:08:40.707 4209.428 - 4234.634: 0.3220% ( 2)
00:08:40.707 4234.634 - 4259.840: 0.3399% ( 3)
00:08:40.707 4259.840 - 4285.046: 0.3459% ( 1)
00:08:40.707 4285.046 - 4310.252: 0.3638% ( 3)
00:08:40.707 4310.252 - 4335.458: 0.3757% ( 2)
00:08:40.707 4335.458 - 4360.665: 0.3817% ( 1)
00:08:40.707 5394.117 - 5419.323: 0.3876% ( 1)
00:08:40.707 5494.942 - 5520.148: 0.3936% ( 1)
00:08:40.707 5545.354 - 5570.560: 0.3996% ( 1)
00:08:40.707 5570.560 - 5595.766: 0.4175% ( 3)
00:08:40.707 5595.766 - 5620.972: 0.4294% ( 2)
00:08:40.707 5620.972 - 5646.178: 0.4413% ( 2)
00:08:40.707 5646.178 - 5671.385: 0.4890% ( 8)
00:08:40.707 5671.385 - 5696.591: 0.5606% ( 12)
00:08:40.707 5696.591 - 5721.797: 0.6322% ( 12)
00:08:40.707 5721.797 - 5747.003: 0.7156% ( 14)
00:08:40.707 5747.003 - 5772.209: 0.7812% ( 11)
00:08:40.707 5772.209 - 5797.415: 0.8469% ( 11)
00:08:40.707 5797.415 - 5822.622: 0.9184% ( 12)
00:08:40.707 5822.622 - 5847.828: 1.1093% ( 32)
00:08:40.707 5847.828 - 5873.034: 1.2285% ( 20)
00:08:40.707 5873.034 - 5898.240: 1.3597% ( 22)
00:08:40.707 5898.240 - 5923.446: 1.4969% ( 23)
00:08:40.707 5923.446 - 5948.652: 1.6818% ( 31)
00:08:40.707 5948.652 - 5973.858: 2.0754% ( 66)
00:08:40.707 5973.858 - 5999.065: 2.4034% ( 55)
00:08:40.707 5999.065 - 6024.271: 2.6837% ( 47)
00:08:40.707 6024.271 - 6049.477: 3.0713% ( 65)
00:08:40.707 6049.477 - 6074.683: 3.3337% ( 44)
00:08:40.707 6074.683 - 6099.889: 3.8228% ( 82)
00:08:40.707 6099.889 - 6125.095: 4.2641% ( 74)
00:08:40.707 6125.095 - 6150.302: 4.5265% ( 44)
00:08:40.707 6150.302 - 6175.508: 4.8604% ( 56)
00:08:40.707 6175.508 - 6200.714: 5.1050% ( 41)
00:08:40.707 6200.714 - 6225.920: 5.5582% ( 76)
00:08:40.707 6225.920 - 6251.126: 5.8922% ( 56)
00:08:40.707 6251.126 - 6276.332: 6.3454% ( 76)
00:08:40.707 6276.332 - 6301.538: 6.8941% ( 92)
00:08:40.707 6301.538 - 6326.745: 7.3831% ( 82)
00:08:40.707 6326.745 - 6351.951: 8.2300% ( 142)
00:08:40.707 6351.951 - 6377.157: 9.1543% ( 155)
00:08:40.707 6377.157 - 6402.363: 10.1741% ( 171)
00:08:40.707 6402.363 - 6427.569: 11.2238% ( 176)
00:08:40.707 6427.569 - 6452.775: 12.0945% ( 146)
00:08:40.707 6452.775 - 6503.188: 14.2653% ( 364)
00:08:40.707 6503.188 - 6553.600: 17.7123% ( 578)
00:08:40.707 6553.600 - 6604.012: 21.1951% ( 584)
00:08:40.707 6604.012 - 6654.425: 25.0000% ( 638)
00:08:40.707 6654.425 - 6704.837: 28.6856% ( 618)
00:08:40.707 6704.837 - 6755.249: 32.2102% ( 591)
00:08:40.707 6755.249 - 6805.662: 35.1264% ( 489)
00:08:40.707 6805.662 - 6856.074: 38.5735% ( 578)
00:08:40.707 6856.074 - 6906.486: 41.6567% ( 517)
00:08:40.707 6906.486 - 6956.898: 44.7459% ( 518)
00:08:40.707 6956.898 - 7007.311: 47.1672% ( 406)
00:08:40.707 7007.311 - 7057.723: 49.3917% ( 373)
00:08:40.707 7057.723 - 7108.135: 51.7056% ( 388)
00:08:40.707 7108.135 - 7158.548: 53.1489% ( 242)
00:08:40.707 7158.548 - 7208.960: 54.6398% ( 250)
00:08:40.707 7208.960 - 7259.372: 56.1904% ( 260)
00:08:40.707 7259.372 - 7309.785: 57.3294% ( 191)
00:08:40.707 7309.785 - 7360.197: 58.2598% ( 156)
00:08:40.707 7360.197 - 7410.609: 59.0530% ( 133)
00:08:40.707 7410.609 - 7461.022: 59.7746% ( 121)
00:08:40.707 7461.022 - 7511.434: 60.5081% ( 123)
00:08:40.708 7511.434 - 7561.846: 61.1820% ( 113)
00:08:40.708 7561.846 - 7612.258: 62.1302% ( 159)
00:08:40.708 7612.258 - 7662.671: 63.1858% ( 177)
00:08:40.708 7662.671 - 7713.083: 64.1698% ( 165)
00:08:40.708 7713.083 - 7763.495: 65.1777% ( 169)
00:08:40.708 7763.495 - 7813.908: 66.6865% ( 253)
00:08:40.708 7813.908 - 7864.320: 67.8733% ( 199)
00:08:40.708 7864.320 - 7914.732: 69.4776% ( 269)
00:08:40.708 7914.732 - 7965.145: 71.0759% ( 268)
00:08:40.708 7965.145 - 8015.557: 72.8829% ( 303)
00:08:40.708 8015.557 - 8065.969: 74.6243% ( 292)
00:08:40.708 8065.969 - 8116.382: 76.3359% ( 287)
00:08:40.708 8116.382 - 8166.794: 78.0057% ( 280)
00:08:40.708 8166.794 - 8217.206: 79.5205% ( 254)
00:08:40.708 8217.206 - 8267.618: 80.8683% ( 226)
00:08:40.708 8267.618 - 8318.031: 82.0193% ( 193)
00:08:40.708 8318.031 - 8368.443: 82.8304% ( 136)
00:08:40.708 8368.443 - 8418.855: 83.4447% ( 103)
00:08:40.708 8418.855 - 8469.268: 84.0470% ( 101)
00:08:40.708 8469.268 - 8519.680: 84.6076% ( 94)
00:08:40.708 8519.680 - 8570.092: 84.9952% ( 65)
00:08:40.708 8570.092 - 8620.505: 85.6691% ( 113)
00:08:40.708 8620.505 - 8670.917: 86.0091% ( 57)
00:08:40.708 8670.917 - 8721.329: 86.3311% ( 54)
00:08:40.708 8721.329 - 8771.742: 86.6472% ( 53)
00:08:40.708 8771.742 - 8822.154: 86.9812% ( 56)
00:08:40.708 8822.154 - 8872.566: 87.4463% ( 78)
00:08:40.708 8872.566 - 8922.978: 88.0546% ( 102)
00:08:40.708 8922.978 - 8973.391: 88.4721% ( 70)
00:08:40.708 8973.391 - 9023.803: 88.8240% ( 59)
00:08:40.708 9023.803 - 9074.215: 89.3130% ( 82)
00:08:40.708 9074.215 - 9124.628: 89.5813% ( 45)
00:08:40.708 9124.628 - 9175.040: 89.7901% ( 35)
00:08:40.708 9175.040 - 9225.452: 89.9928% ( 34)
00:08:40.708 9225.452 - 9275.865: 90.1658% ( 29)
00:08:40.708 9275.865 - 9326.277: 90.3328% ( 28)
00:08:40.708 9326.277 - 9376.689: 90.4938% ( 27)
00:08:40.708 9376.689 - 9427.102: 90.6906% ( 33)
00:08:40.708 9427.102 - 9477.514: 90.9470% ( 43)
00:08:40.708 9477.514 - 9527.926: 91.2154% ( 45)
00:08:40.708 9527.926 - 9578.338: 91.4540% ( 40)
00:08:40.708 9578.338 - 9628.751: 91.6150% ( 27)
00:08:40.708 9628.751 - 9679.163: 91.7700% ( 26)
00:08:40.708 9679.163 - 9729.575: 92.1040% ( 56)
00:08:40.708 9729.575 - 9779.988: 92.3426% ( 40)
00:08:40.708 9779.988 - 9830.400: 92.5274% ( 31)
00:08:40.708 9830.400 - 9880.812: 92.7421% ( 36)
00:08:40.708 9880.812 - 9931.225: 93.1000% ( 60)
00:08:40.708 9931.225 - 9981.637: 93.2133% ( 19)
00:08:40.708 9981.637 - 10032.049: 93.3385% ( 21)
00:08:40.708 10032.049 - 10082.462: 93.4578% ( 20)
00:08:40.708 10082.462 - 10132.874: 93.6128% ( 26)
00:08:40.708 10132.874 - 10183.286: 93.7560% ( 24)
00:08:40.708 10183.286 - 10233.698: 93.8514% ( 16)
00:08:40.708 10233.698 - 10284.111: 93.9528% ( 17)
00:08:40.708 10284.111 - 10334.523: 94.1376% ( 31)
00:08:40.708 10334.523 - 10384.935: 94.2211% ( 14)
00:08:40.708 10384.935 - 10435.348: 94.3106% ( 15)
00:08:40.708 10435.348 - 10485.760: 94.4120% ( 17)
00:08:40.708 10485.760 - 10536.172: 94.5253% ( 19)
00:08:40.708 10536.172 - 10586.585: 94.6147% ( 15)
00:08:40.708 10586.585 - 10636.997: 94.6744% ( 10)
00:08:40.708 10636.997 - 10687.409: 94.7459% ( 12)
00:08:40.708 10687.409 - 10737.822: 94.9070% ( 27)
00:08:40.708 10737.822 - 10788.234: 95.0083% ( 17)
00:08:40.708 10788.234 - 10838.646: 95.1992% ( 32)
00:08:40.708 10838.646 - 10889.058: 95.2946% ( 16)
00:08:40.708 10889.058 - 10939.471: 95.3364% ( 7)
00:08:40.708 10939.471 - 10989.883: 95.3662% ( 5)
00:08:40.708 10989.883 - 11040.295: 95.4079% ( 7)
00:08:40.708 11040.295 - 11090.708: 95.4198% ( 2)
00:08:40.708 11141.120 - 11191.532: 95.4556% ( 6)
00:08:40.708 11191.532 - 11241.945: 95.5033% ( 8)
00:08:40.708 11241.945 - 11292.357: 95.5332% ( 5)
00:08:40.708 11292.357 - 11342.769: 95.6703% ( 23)
00:08:40.708 11342.769 - 11393.182: 95.7777% ( 18)
00:08:40.708 11393.182 - 11443.594: 95.8612% ( 14)
00:08:40.708 11443.594 - 11494.006: 95.9506% ( 15)
00:08:40.708 11494.006 - 11544.418: 96.0759% ( 21)
00:08:40.708 11544.418 - 11594.831: 96.1176% ( 7)
00:08:40.708 11594.831 - 11645.243: 96.1653% ( 8)
00:08:40.708 11645.243 - 11695.655: 96.2250% ( 10)
00:08:40.708 11695.655 - 11746.068: 96.2906% ( 11)
00:08:40.708 11746.068 - 11796.480: 96.3502% ( 10)
00:08:40.708 11796.480 - 11846.892: 96.4039% ( 9)
00:08:40.708 11846.892 - 11897.305: 96.4993% ( 16)
00:08:40.708 11897.305 - 11947.717: 96.6186% ( 20)
00:08:40.708 11947.717 - 11998.129: 96.7080% ( 15)
00:08:40.708 11998.129 - 12048.542: 96.8213% ( 19)
00:08:40.708 12048.542 - 12098.954: 96.9704% ( 25)
00:08:40.708 12098.954 - 12149.366: 97.1255% ( 26)
00:08:40.708 12149.366 - 12199.778: 97.2746% ( 25)
00:08:40.708 12199.778 - 12250.191: 97.5787% ( 51)
00:08:40.708 12250.191 - 12300.603: 97.7278% ( 25)
00:08:40.708 12300.603 - 12351.015: 97.8948% ( 28)
00:08:40.708 12351.015 - 12401.428: 97.9962% ( 17)
00:08:40.708 12401.428 - 12451.840: 98.0856% ( 15)
00:08:40.708 12451.840 - 12502.252: 98.1691% ( 14)
00:08:40.708 12502.252 - 12552.665: 98.2765% ( 18)
00:08:40.708 12552.665 - 12603.077: 98.3600% ( 14)
00:08:40.708 12603.077 - 12653.489: 98.4375% ( 13)
00:08:40.708 12653.489 - 12703.902: 98.5150% ( 13)
00:08:40.708 12703.902 - 12754.314: 98.5985% ( 14)
00:08:40.708 12754.314 - 12804.726: 98.6939% ( 16)
00:08:40.708 12804.726 - 12855.138: 98.7476% ( 9)
00:08:40.708 12855.138 - 12905.551: 98.7953% ( 8)
00:08:40.708 12905.551 - 13006.375: 98.8371% ( 7)
00:08:40.708 13006.375 - 13107.200: 98.8550% ( 3)
00:08:40.708 13208.025 - 13308.849: 98.8729% ( 3)
00:08:40.708 13308.849 - 13409.674: 98.9265% ( 9)
00:08:40.708 13409.674 - 13510.498: 98.9742% ( 8)
00:08:40.708 13510.498 - 13611.323: 99.0697% ( 16)
00:08:40.708 13611.323 - 13712.148: 99.1174% ( 8)
00:08:40.708 13712.148 - 13812.972: 99.1412% ( 4)
00:08:40.708 13812.972 - 13913.797: 99.1591% ( 3)
00:08:40.708 13913.797 - 14014.622: 99.1830% ( 4)
00:08:40.708 14014.622 - 14115.446: 99.2068% ( 4)
00:08:40.708 14115.446 - 14216.271: 99.2307% ( 4)
00:08:40.708 14216.271 - 14317.095: 99.2366% ( 1)
00:08:40.708 17039.360 - 17140.185: 99.2486% ( 2)
00:08:40.708 17140.185 - 17241.009: 99.3022% ( 9)
00:08:40.708 17241.009 - 17341.834: 99.3559% ( 9)
00:08:40.708 17341.834 - 17442.658: 99.4036% ( 8)
00:08:40.708 17442.658 - 17543.483: 99.4633% ( 10)
00:08:40.708 17543.483 - 17644.308: 99.5050% ( 7)
00:08:40.708 17644.308 - 17745.132: 99.5289% ( 4)
00:08:40.708 17745.132 - 17845.957: 99.5527% ( 4)
00:08:40.708 17845.957 - 17946.782: 99.5766% ( 4)
00:08:40.708 17946.782 - 18047.606: 99.5945% ( 3)
00:08:40.708 18047.606 - 18148.431: 99.6183% ( 4)
00:08:40.708 21677.292 - 21778.117: 99.6422% ( 4)
00:08:40.708 22181.415 - 22282.240: 99.6541% ( 2)
00:08:40.708 22282.240 - 22383.065: 99.6780% ( 4)
00:08:40.708 22383.065 - 22483.889: 99.7078% ( 5)
00:08:40.708 22483.889 - 22584.714: 99.7257% ( 3)
00:08:40.708 22584.714 - 22685.538: 99.7674% ( 7)
00:08:40.708 22685.538 - 22786.363: 99.7972% ( 5)
00:08:40.708 22786.363 - 22887.188: 99.8449% ( 8)
00:08:40.708 22887.188 - 22988.012: 99.8867% ( 7)
00:08:40.708 22988.012 - 23088.837: 99.9046% ( 3)
00:08:40.708 23088.837 - 23189.662: 99.9344% ( 5)
00:08:40.708 23189.662 - 23290.486: 99.9642% ( 5)
00:08:40.708 23290.486 - 23391.311: 100.0000% ( 6)
00:08:40.708
00:08:40.708 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:40.708 ==============================================================================
00:08:40.708 Range in us Cumulative IO count
00:08:40.708 3680.098 - 3705.305: 0.0060% ( 1)
00:08:40.708 3730.511 - 3755.717: 0.0119% ( 1)
00:08:40.708 3755.717 - 3780.923: 0.0179% ( 1)
00:08:40.708 3780.923 - 3806.129: 0.0298% ( 2)
00:08:40.708 3806.129 - 3831.335: 0.0775% ( 8)
00:08:40.708 3831.335 - 3856.542: 0.1729% ( 16)
00:08:40.708 3856.542 - 3881.748: 0.2385% ( 11)
00:08:40.708 3881.748 - 3906.954: 0.2863% ( 8)
00:08:40.708 3906.954 - 3932.160: 0.3042% ( 3)
00:08:40.708 3932.160 - 3957.366: 0.3161% ( 2)
00:08:40.708 3957.366 - 3982.572: 0.3280% ( 2)
00:08:40.708 3982.572 - 4007.778: 0.3399% ( 2)
00:08:40.708 4007.778 - 4032.985: 0.3519% ( 2)
00:08:40.708 4032.985 - 4058.191: 0.3698% ( 3)
00:08:40.708 4058.191 - 4083.397: 0.3817% ( 2)
00:08:40.708 5293.292 - 5318.498: 0.3876% ( 1)
00:08:40.708 5419.323 - 5444.529: 0.4354% ( 8)
00:08:40.708 5444.529 - 5469.735: 0.5129% ( 13)
00:08:40.708 5469.735 - 5494.942: 0.5785% ( 11)
00:08:40.708 5494.942 - 5520.148: 0.6381% ( 10)
00:08:40.708 5520.148 - 5545.354: 0.6858% ( 8)
00:08:40.708 5545.354 - 5570.560: 0.7037% ( 3)
00:08:40.708 5570.560 - 5595.766: 0.7156% ( 2)
00:08:40.708 5595.766 - 5620.972: 0.7276% ( 2)
00:08:40.708 5620.972 - 5646.178: 0.7395% ( 2)
00:08:40.708 5646.178 - 5671.385: 0.7634% ( 4)
00:08:40.708 5671.385 - 5696.591: 0.7872% ( 4)
00:08:40.708 5696.591 - 5721.797: 0.8469% ( 10)
00:08:40.708 5721.797 - 5747.003: 0.9065% ( 10)
00:08:40.708 5747.003 - 5772.209: 0.9482% ( 7)
00:08:40.708 5772.209 - 5797.415: 1.0019% ( 9)
00:08:40.708 5797.415 - 5822.622: 1.0735% ( 12)
00:08:40.708 5822.622 - 5847.828: 1.2166% ( 24)
00:08:40.708 5847.828 - 5873.034: 1.3478% ( 22)
00:08:40.708 5873.034 - 5898.240: 1.4432% ( 16)
00:08:40.708 5898.240 - 5923.446: 1.5446% ( 17)
00:08:40.708 5923.446 - 5948.652: 1.6997% ( 26)
00:08:40.708 5948.652 - 5973.858: 1.8965% ( 33)
00:08:40.708 5973.858 - 5999.065: 2.1410% ( 41)
00:08:40.708 5999.065 - 6024.271: 2.5167% ( 63)
00:08:40.708 6024.271 - 6049.477: 2.8686% ( 59)
00:08:40.708 6049.477 - 6074.683: 3.3278% ( 77)
00:08:40.708 6074.683 - 6099.889: 3.9897% ( 111)
00:08:40.708 6099.889 - 6125.095: 4.4251% ( 73)
00:08:40.708 6125.095 - 6150.302: 4.9917% ( 95)
00:08:40.708 6150.302 - 6175.508: 5.3375% ( 58)
00:08:40.708 6175.508 - 6200.714: 5.7550% ( 70)
00:08:40.708 6200.714 - 6225.920: 6.2858% ( 89)
00:08:40.708 6225.920 - 6251.126: 6.6257% ( 57)
00:08:40.708 6251.126 - 6276.332: 7.0253% ( 67)
00:08:40.708 6276.332 - 6301.538: 7.4487% ( 71)
00:08:40.708 6301.538 - 6326.745: 8.0928% ( 108)
00:08:40.708 6326.745 - 6351.951: 8.9754% ( 148)
00:08:40.708 6351.951 - 6377.157: 9.7805% ( 135)
00:08:40.708 6377.157 - 6402.363: 10.5021% ( 121)
00:08:40.708 6402.363 - 6427.569: 11.3550% ( 143)
00:08:40.708 6427.569 - 6452.775: 12.2674% ( 153)
00:08:40.708 6452.775 - 6503.188: 14.6291% ( 396)
00:08:40.708 6503.188 - 6553.600: 17.0384% ( 404)
00:08:40.708 6553.600 - 6604.012: 20.6226% ( 601)
00:08:40.708 6604.012 - 6654.425: 25.5308% ( 823)
00:08:40.708 6654.425 - 6704.837: 29.2223% ( 619)
00:08:40.708 6704.837 - 6755.249: 32.1326% ( 488)
00:08:40.708 6755.249 - 6805.662: 35.3113% ( 533)
00:08:40.708 6805.662 - 6856.074: 38.8836% ( 599)
00:08:40.708 6856.074 - 6906.486: 41.4182% ( 425)
00:08:40.708 6906.486 - 6956.898: 44.5134% ( 519)
00:08:40.708 6956.898 - 7007.311: 47.3640% ( 478)
00:08:40.708 7007.311 - 7057.723: 49.6362% ( 381)
00:08:40.708 7057.723 - 7108.135: 51.7235% ( 350)
00:08:40.708 7108.135 - 7158.548: 53.6260% ( 319)
00:08:40.708 7158.548 - 7208.960: 55.0453% ( 238)
00:08:40.708 7208.960 - 7259.372: 56.2858% ( 208)
00:08:40.708 7259.372 - 7309.785: 57.0730% ( 132)
00:08:40.708 7309.785 - 7360.197: 57.7290% ( 110)
00:08:40.708 7360.197 - 7410.609: 58.5818% ( 143)
00:08:40.708 7410.609 - 7461.022: 59.4704% ( 149)
00:08:40.708 7461.022 - 7511.434: 60.1801% ( 119)
00:08:40.708 7511.434 - 7561.846: 60.7824% ( 101)
00:08:40.708 7561.846 - 7612.258: 61.4862% ( 118)
00:08:40.708 7612.258 - 7662.671: 62.3151% ( 139)
00:08:40.708 7662.671 - 7713.083: 63.3707% ( 177)
00:08:40.708 7713.083 - 7763.495: 64.4442% ( 180)
00:08:40.708 7763.495 - 7813.908: 65.9888% ( 259)
00:08:40.708 7813.908 - 7864.320: 67.7779% ( 300)
00:08:40.708 7864.320 - 7914.732: 69.5909% ( 304)
00:08:40.708 7914.732 - 7965.145: 71.3740% ( 299)
00:08:40.708 7965.145 - 8015.557: 73.1035% ( 290)
00:08:40.708 8015.557 - 8065.969: 75.0537% ( 327)
00:08:40.708 8065.969 - 8116.382: 76.6997% ( 276)
00:08:40.708 8116.382 - 8166.794: 78.4351% ( 291)
00:08:40.708 8166.794 - 8217.206: 79.8903% ( 244)
00:08:40.708 8217.206 - 8267.618: 80.9637% ( 180)
00:08:40.708 8267.618 - 8318.031: 81.9656% ( 168)
00:08:40.708 8318.031 - 8368.443: 83.0332% ( 179)
00:08:40.708 8368.443 - 8418.855: 83.8681% ( 140)
00:08:40.708 8418.855 - 8469.268: 84.6672% ( 134)
00:08:40.708 8469.268 - 8519.680: 85.4187% ( 126)
00:08:40.708 8519.680 - 8570.092: 85.9077% ( 82)
00:08:40.708 8570.092 - 8620.505: 86.4146% ( 85)
00:08:40.708 8620.505 - 8670.917: 87.0348% ( 104)
00:08:40.708 8670.917 - 8721.329: 87.5537% ( 87)
00:08:40.708 8721.329 - 8771.742: 87.9592% ( 68)
00:08:40.708 8771.742 - 8822.154: 88.4005% ( 74)
00:08:40.708 8822.154 - 8872.566: 88.7643% ( 61)
00:08:40.708 8872.566 - 8922.978: 88.9730% ( 35)
00:08:40.708 8922.978 - 8973.391: 89.1520% ( 30)
00:08:40.708 8973.391 - 9023.803: 89.2951% ( 24)
00:08:40.708 9023.803 - 9074.215: 89.4144% ( 20)
00:08:40.708 9074.215 - 9124.628: 89.5277% ( 19)
00:08:40.708 9124.628 - 9175.040: 89.6112% ( 14)
00:08:40.708 9175.040 - 9225.452: 89.6827% ( 12)
00:08:40.708 9225.452 - 9275.865: 89.7483% ( 11)
00:08:40.708 9275.865 - 9326.277: 89.8497% ( 17)
00:08:40.708 9326.277 - 9376.689: 89.9690% ( 20)
00:08:40.708 9376.689 - 9427.102: 90.1718% ( 34)
00:08:40.708 9427.102 - 9477.514: 90.3745% ( 34)
00:08:40.708 9477.514 - 9527.926: 90.5952% ( 37)
00:08:40.708 9527.926 - 9578.338: 90.8158% ( 37)
00:08:40.708 9578.338 - 9628.751: 91.0484% ( 39)
00:08:40.708 9628.751 - 9679.163: 91.4361% ( 65)
00:08:40.708 9679.163 - 9729.575: 91.7104% ( 46)
00:08:40.708 9729.575 - 9779.988: 92.0623% ( 59)
00:08:40.708 9779.988 - 9830.400: 92.3247% ( 44)
00:08:40.708 9830.400 - 9880.812: 92.6288% ( 51)
00:08:40.708 9880.812 - 9931.225: 92.9270% ( 50)
00:08:40.708 9931.225 - 9981.637: 93.2431% ( 53)
00:08:40.708 9981.637 - 10032.049: 93.4816% ( 40)
00:08:40.709 10032.049 - 10082.462: 93.6546% ( 29)
00:08:40.709 10082.462 - 10132.874: 93.7917% ( 23)
00:08:40.709 10132.874 - 10183.286: 93.8991% ( 18)
00:08:40.709 10183.286 - 10233.698: 93.9647% ( 11)
00:08:40.709 10233.698 - 10284.111: 94.1138% ( 25)
00:08:40.709 10284.111 - 10334.523: 94.2271% ( 19)
00:08:40.709 10334.523 - 10384.935: 94.3106% ( 14)
00:08:40.709 10384.935 - 10435.348: 94.3941% ( 14)
00:08:40.709 10435.348 - 10485.760: 94.4776% ( 14)
00:08:40.709 10485.760 - 10536.172: 94.5670% ( 15)
00:08:40.709 10536.172 - 10586.585: 94.6923% ( 21)
00:08:40.709 10586.585 - 10636.997: 94.7638% ( 12)
00:08:40.709 10636.997 - 10687.409: 94.8354% ( 12)
00:08:40.709 10687.409 - 10737.822: 94.8950% ( 10)
00:08:40.709 10737.822 - 10788.234: 94.9666% ( 12)
00:08:40.709 10788.234 - 10838.646: 95.0501% ( 14)
00:08:40.709 10838.646 - 10889.058: 95.1396% ( 15)
00:08:40.709 10889.058 - 10939.471: 95.2409% ( 17)
00:08:40.709 10939.471 - 10989.883: 95.4020% ( 27)
00:08:40.709 10989.883 - 11040.295: 95.5988% ( 33)
00:08:40.709 11040.295 - 11090.708: 95.7479% ( 25)
00:08:40.709 11090.708 - 11141.120: 95.8373% ( 15)
00:08:40.709 11141.120 - 11191.532: 95.9029% ( 11)
00:08:40.709 11191.532 - 11241.945: 95.9566% ( 9)
00:08:40.709 11241.945 - 11292.357: 96.0043% ( 8)
00:08:40.709 11292.357 - 11342.769: 96.0520% ( 8)
00:08:40.709 11342.769 - 11393.182: 96.0878% ( 6)
00:08:40.709 11393.182 - 11443.594: 96.1236% ( 6)
00:08:40.709 11443.594 - 11494.006: 96.1594% ( 6)
00:08:40.709 11494.006 - 11544.418: 96.1951% ( 6)
00:08:40.709 11544.418 - 11594.831: 96.2309% ( 6)
00:08:40.709 11594.831 - 11645.243: 96.2607% ( 5)
00:08:40.709 11645.243 - 11695.655: 96.3144% ( 9)
00:08:40.709 11695.655 - 11746.068: 96.3740% ( 10)
00:08:40.709 11746.068 - 11796.480: 96.4277% ( 9)
00:08:40.709 11796.480 - 11846.892: 96.4814% ( 9)
00:08:40.709 11846.892 - 11897.305: 96.5291% ( 8)
00:08:40.709 11897.305 - 11947.717: 96.5708% ( 7)
00:08:40.709 11947.717 - 11998.129: 96.6305% ( 10)
00:08:40.709 11998.129 - 12048.542: 96.7140% ( 14)
00:08:40.709 12048.542 - 12098.954: 96.7617% ( 8)
00:08:40.709 12098.954 - 12149.366: 96.7975% ( 6)
00:08:40.709 12149.366 - 12199.778: 96.8273% ( 5)
00:08:40.709 12199.778 - 12250.191: 96.8690% ( 7)
00:08:40.709 12250.191 - 12300.603: 96.8929% ( 4)
00:08:40.709 12300.603 - 12351.015: 96.9525% ( 10)
00:08:40.709 12351.015 - 12401.428: 97.0241% ( 12)
00:08:40.709 12401.428 - 12451.840: 97.0897% ( 11)
00:08:40.709 12451.840 - 12502.252: 97.1553% ( 11)
00:08:40.709 12502.252 - 12552.665: 97.2388% ( 14)
00:08:40.709 12552.665 - 12603.077: 97.4356% ( 33)
00:08:40.709 12603.077 - 12653.489: 97.5966% ( 27)
00:08:40.709 12653.489 - 12703.902: 97.6801% ( 14)
00:08:40.709 12703.902 - 12754.314: 97.8053% ( 21)
00:08:40.709 12754.314 - 12804.726: 98.0021% ( 33)
00:08:40.709 12804.726 - 12855.138: 98.2467% ( 41)
00:08:40.709 12855.138 - 12905.551: 98.4375% ( 32)
00:08:40.709 12905.551 - 13006.375: 98.6522% ( 36)
00:08:40.709 13006.375 - 13107.200: 98.7834% ( 22)
00:08:40.709 13107.200 - 13208.025: 98.8788% ( 16)
00:08:40.709 13208.025 - 13308.849: 99.0041% ( 21)
00:08:40.709 13308.849 - 13409.674: 99.1114% ( 18)
00:08:40.709 13409.674 - 13510.498: 99.1889% ( 13)
00:08:40.709 13510.498 - 13611.323: 99.2188% ( 5)
00:08:40.709 13611.323 - 13712.148: 99.2366% ( 3)
00:08:40.709 16938.535 - 17039.360: 99.2486% ( 2)
00:08:40.709 17039.360 - 17140.185: 99.2844% ( 6)
00:08:40.709 17140.185 - 17241.009: 99.3201% ( 6)
00:08:40.709 17241.009 - 17341.834: 99.3559% ( 6)
00:08:40.709 17341.834 - 17442.658: 99.3977% ( 7)
00:08:40.709 17442.658 - 17543.483: 99.4334% ( 6)
00:08:40.709 17543.483 - 17644.308: 99.4692% ( 6)
00:08:40.709 17644.308 - 17745.132: 99.4931% ( 4)
00:08:40.709 17745.132 - 17845.957: 99.5229% ( 5)
00:08:40.709 17845.957 - 17946.782: 99.5527% ( 5)
00:08:40.709 17946.782 - 18047.606: 99.5825% ( 5)
00:08:40.709 18047.606 - 18148.431: 99.6124% ( 5)
00:08:40.709 18148.431 - 18249.255: 99.6183% ( 1)
00:08:40.709 21273.994 - 21374.818: 99.6243% ( 1)
00:08:40.709 21374.818 - 21475.643: 99.6601% ( 6)
00:08:40.709 21475.643 - 21576.468: 99.7018% ( 7)
00:08:40.709 21576.468 - 21677.292: 99.7436% ( 7)
00:08:40.709 21677.292 - 21778.117: 99.8211% ( 13)
00:08:40.709 21778.117 - 21878.942: 99.8628% ( 7)
00:08:40.709 21878.942 - 21979.766: 99.8986% ( 6)
00:08:40.709 21979.766 - 22080.591: 99.9046% ( 1)
00:08:40.709 22080.591 - 22181.415: 99.9105% ( 1)
00:08:40.709 22181.415 - 22282.240: 99.9463% ( 6)
00:08:40.709 22282.240 - 22383.065: 99.9821% ( 6)
00:08:40.709 22383.065 - 22483.889: 100.0000% ( 3)
00:08:40.709
00:08:40.709 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:40.709 ==============================================================================
00:08:40.709 Range in us Cumulative IO count
00:08:40.709 3276.800 - 3302.006: 0.0060% ( 1)
00:08:40.709 3528.862 - 3554.068: 0.0298% ( 4)
00:08:40.709 3554.068 - 3579.274: 0.0835% ( 9)
00:08:40.709 3579.274 - 3604.480: 0.1372% ( 9)
00:08:40.709 3604.480 - 3629.686: 0.2087% ( 12)
00:08:40.709 3629.686 - 3654.892: 0.2743% ( 11)
00:08:40.709 3654.892 - 3680.098: 0.2863% ( 2)
00:08:40.709 3680.098 - 3705.305: 0.2982% ( 2)
00:08:40.709 3705.305 - 3730.511: 0.3101% ( 2)
00:08:40.709 3730.511 - 3755.717: 0.3220% ( 2)
00:08:40.709 3755.717 - 3780.923: 0.3340% ( 2)
00:08:40.709 3780.923 - 3806.129: 0.3459% ( 2)
00:08:40.709 3806.129 - 3831.335: 0.3578% ( 2)
00:08:40.709 3831.335 - 3856.542: 0.3698% ( 2)
00:08:40.709 3856.542 - 3881.748: 0.3817% ( 2)
00:08:40.709 5167.262 - 5192.468: 0.3996% ( 3)
00:08:40.709 5192.468 - 5217.674: 0.4473% ( 8)
00:08:40.709 5217.674 - 5242.880: 0.5487% ( 17)
00:08:40.709 5242.880 - 5268.086: 0.6441% ( 16)
00:08:40.709 5268.086 - 5293.292: 0.6620% ( 3)
00:08:40.709 5293.292 - 5318.498: 0.6739% ( 2)
00:08:40.709 5318.498 - 5343.705: 0.6858% ( 2)
00:08:40.709 5343.705 - 5368.911: 0.6978% ( 2)
00:08:40.709 5368.911 - 5394.117: 0.7097% ( 2)
00:08:40.709 5394.117 - 5419.323: 0.7216% ( 2)
00:08:40.709 5419.323 - 5444.529: 0.7276% ( 1)
00:08:40.709 5444.529 - 5469.735: 0.7455% ( 3)
00:08:40.709 5469.735 - 5494.942: 0.7574% ( 2)
00:08:40.709 5494.942 - 5520.148: 0.7693% ( 2)
00:08:40.709 5620.972 - 5646.178: 0.7753% ( 1)
00:08:40.709 5646.178 - 5671.385: 0.7872% ( 2)
00:08:40.709 5671.385 - 5696.591: 0.7991% ( 2)
00:08:40.709 5696.591 - 5721.797: 0.8290% ( 5)
00:08:40.709 5721.797 - 5747.003: 0.8767% ( 8)
00:08:40.709 5747.003 - 5772.209: 0.9125% ( 6)
00:08:40.709 5772.209 - 5797.415: 1.1271% ( 36)
00:08:40.709 5797.415 - 5822.622: 1.1868% ( 10)
00:08:40.709 5822.622 - 5847.828: 1.2464% ( 10)
00:08:40.709 5847.828 - 5873.034: 1.4253% ( 30)
00:08:40.709 5873.034 - 5898.240: 1.4909% ( 11)
00:08:40.709 5898.240 - 5923.446: 1.5923% ( 17)
00:08:40.709 5923.446 - 5948.652: 1.6997% ( 18)
00:08:40.709 5948.652 - 5973.858: 1.8070% ( 18)
00:08:40.709 5973.858 - 5999.065: 1.9561% ( 25)
00:08:40.709 5999.065 - 6024.271: 2.2960% ( 57)
00:08:40.709 6024.271 - 6049.477: 2.6240% ( 55)
00:08:40.709 6049.477 - 6074.683: 3.0296% ( 68)
00:08:40.709 6074.683 - 6099.889: 3.7452% ( 120)
00:08:40.709 6099.889 - 6125.095: 4.0196% ( 46)
00:08:40.709 6125.095 - 6150.302: 4.5682% ( 92)
00:08:40.709 6150.302 - 6175.508: 5.0215% ( 76)
00:08:40.709 6175.508 - 6200.714: 5.4389% ( 70)
00:08:40.709 6200.714 - 6225.920: 5.9936% ( 93)
00:08:40.709 6225.920 - 6251.126: 6.3514% ( 60)
00:08:40.709 6251.126 - 6276.332: 6.6496% ( 50)
00:08:40.709 6276.332 - 6301.538: 6.9835% ( 56)
00:08:40.709 6301.538 - 6326.745: 7.3533% ( 62)
00:08:40.709 6326.745 - 6351.951: 7.9377% ( 98)
00:08:40.709 6351.951 - 6377.157: 8.4983% ( 94)
00:08:40.709 6377.157 - 6402.363: 9.3690% ( 146)
00:08:40.709 6402.363 - 6427.569: 10.4425% ( 180)
00:08:40.709 6427.569 - 6452.775: 11.7665% ( 222)
00:08:40.709 6452.775 - 6503.188: 14.3368% ( 431)
00:08:40.709 6503.188 - 6553.600: 17.2233% ( 484)
00:08:40.709 6553.600 - 6604.012: 20.7001% ( 583)
00:08:40.709 6604.012 - 6654.425: 24.6601% ( 664)
00:08:40.709 6654.425 - 6704.837: 29.1269% ( 749)
00:08:40.709 6704.837 - 6755.249: 32.3235% ( 536)
00:08:40.709 6755.249 - 6805.662: 35.3232% ( 503)
00:08:40.709 6805.662 - 6856.074: 39.1758% ( 646)
00:08:40.709 6856.074 - 6906.486: 42.0503% ( 482)
00:08:40.709 6906.486 - 6956.898: 44.5611% ( 421)
00:08:40.709 6956.898 - 7007.311: 46.9645% ( 403)
00:08:40.709 7007.311 - 7057.723: 49.7078% ( 460)
00:08:40.709 7057.723 - 7108.135: 52.0456% ( 392)
00:08:40.709 7108.135 - 7158.548: 53.5902% ( 259)
00:08:40.709 7158.548 - 7208.960: 54.7889% ( 201)
00:08:40.709 7208.960 - 7259.372: 55.9697% ( 198)
00:08:40.709 7259.372 - 7309.785: 56.8881% ( 154)
00:08:40.709 7309.785 - 7360.197: 57.4249% ( 90)
00:08:40.709 7360.197 - 7410.609: 58.2180% ( 133)
00:08:40.709 7410.609 - 7461.022: 58.9396% ( 121)
00:08:40.709 7461.022 - 7511.434: 59.6255% ( 115)
00:08:40.709 7511.434 - 7561.846: 60.3769% ( 126)
00:08:40.709 7561.846 - 7612.258: 61.3669% ( 166)
00:08:40.709 7612.258 - 7662.671: 62.5060% ( 191)
00:08:40.709 7662.671 - 7713.083: 63.6450% ( 191)
00:08:40.709 7713.083 - 7763.495: 65.1300% ( 249)
00:08:40.709 7763.495 - 7813.908: 66.7820% ( 277)
00:08:40.709 7813.908 - 7864.320: 68.5413% ( 295)
00:08:40.709 7864.320 - 7914.732: 70.7180% ( 365)
00:08:40.709 7914.732 - 7965.145: 72.6384% ( 322)
00:08:40.709 7965.145 - 8015.557: 74.4633% ( 306)
00:08:40.709 8015.557 - 8065.969: 76.0317% ( 263)
00:08:40.709 8065.969 - 8116.382: 77.5286% ( 251)
00:08:40.709 8116.382 - 8166.794: 78.9838% ( 244)
00:08:40.709 8166.794 - 8217.206: 80.1169% ( 190)
00:08:40.709 8217.206 - 8267.618: 81.1248% ( 169)
00:08:40.709 8267.618 - 8318.031: 82.1207% ( 167)
00:08:40.709 8318.031 - 8368.443: 82.9676% ( 142)
00:08:40.709 8368.443 - 8418.855: 83.6951% ( 122)
00:08:40.709 8418.855 - 8469.268: 84.4585% ( 128)
00:08:40.709 8469.268 - 8519.680: 85.0906% ( 106)
00:08:40.709 8519.680 - 8570.092: 85.9136% ( 138)
00:08:40.709 8570.092 - 8620.505: 86.4862% ( 96)
00:08:40.709 8620.505 - 8670.917: 87.0348% ( 92)
00:08:40.709 8670.917 - 8721.329: 87.4940% ( 77)
00:08:40.709 8721.329 - 8771.742: 88.0844% ( 99)
00:08:40.709 8771.742 - 8822.154: 88.5019% ( 70)
00:08:40.709 8822.154 - 8872.566: 88.9134% ( 69)
00:08:40.709 8872.566 - 8922.978: 89.1341% ( 37)
00:08:40.709 8922.978 - 8973.391: 89.3667% ( 39)
00:08:40.709 8973.391 - 9023.803: 89.4919% ( 21)
00:08:40.709 9023.803 - 9074.215: 89.5873% ( 16)
00:08:40.709 9074.215 - 9124.628: 89.6768% ( 15)
00:08:40.709 9124.628 - 9175.040: 89.7304% ( 9)
00:08:40.709 9175.040 - 9225.452: 89.8259% ( 16)
00:08:40.709 9225.452 - 9275.865: 89.9451% ( 20)
00:08:40.709 9275.865 - 9326.277: 90.0883% (
24) 00:08:40.709 9326.277 - 9376.689: 90.2433% ( 26) 00:08:40.709 9376.689 - 9427.102: 90.3626% ( 20) 00:08:40.709 9427.102 - 9477.514: 90.4699% ( 18) 00:08:40.709 9477.514 - 9527.926: 90.6310% ( 27) 00:08:40.709 9527.926 - 9578.338: 90.8158% ( 31) 00:08:40.709 9578.338 - 9628.751: 91.0126% ( 33) 00:08:40.709 9628.751 - 9679.163: 91.2273% ( 36) 00:08:40.709 9679.163 - 9729.575: 91.5196% ( 49) 00:08:40.709 9729.575 - 9779.988: 91.7402% ( 37) 00:08:40.709 9779.988 - 9830.400: 91.9847% ( 41) 00:08:40.709 9830.400 - 9880.812: 92.1875% ( 34) 00:08:40.709 9880.812 - 9931.225: 92.3903% ( 34) 00:08:40.709 9931.225 - 9981.637: 92.7719% ( 64) 00:08:40.709 9981.637 - 10032.049: 92.9389% ( 28) 00:08:40.709 10032.049 - 10082.462: 93.0880% ( 25) 00:08:40.709 10082.462 - 10132.874: 93.2610% ( 29) 00:08:40.709 10132.874 - 10183.286: 93.4339% ( 29) 00:08:40.709 10183.286 - 10233.698: 93.5711% ( 23) 00:08:40.709 10233.698 - 10284.111: 93.9528% ( 64) 00:08:40.709 10284.111 - 10334.523: 94.2152% ( 44) 00:08:40.709 10334.523 - 10384.935: 94.3344% ( 20) 00:08:40.709 10384.935 - 10435.348: 94.4597% ( 21) 00:08:40.709 10435.348 - 10485.760: 94.5790% ( 20) 00:08:40.709 10485.760 - 10536.172: 94.6982% ( 20) 00:08:40.709 10536.172 - 10586.585: 94.8235% ( 21) 00:08:40.709 10586.585 - 10636.997: 94.9070% ( 14) 00:08:40.709 10636.997 - 10687.409: 94.9368% ( 5) 00:08:40.709 10687.409 - 10737.822: 94.9726% ( 6) 00:08:40.709 10737.822 - 10788.234: 95.0143% ( 7) 00:08:40.709 10788.234 - 10838.646: 95.0501% ( 6) 00:08:40.709 10838.646 - 10889.058: 95.1038% ( 9) 00:08:40.709 10889.058 - 10939.471: 95.1574% ( 9) 00:08:40.709 10939.471 - 10989.883: 95.2230% ( 11) 00:08:40.709 10989.883 - 11040.295: 95.2827% ( 10) 00:08:40.709 11040.295 - 11090.708: 95.3781% ( 16) 00:08:40.709 11090.708 - 11141.120: 95.4556% ( 13) 00:08:40.709 11141.120 - 11191.532: 95.6167% ( 27) 00:08:40.709 11191.532 - 11241.945: 95.7180% ( 17) 00:08:40.709 11241.945 - 11292.357: 95.8552% ( 23) 00:08:40.709 11292.357 - 11342.769: 95.9625% ( 18) 00:08:40.709 11342.769 - 11393.182: 96.0759% ( 19) 00:08:40.709 11393.182 - 11443.594: 96.1832% ( 18) 00:08:40.709 11443.594 - 11494.006: 96.2965% ( 19) 00:08:40.709 11494.006 - 11544.418: 96.4098% ( 19) 00:08:40.709 11544.418 - 11594.831: 96.5470% ( 23) 00:08:40.709 11594.831 - 11645.243: 96.6365% ( 15) 00:08:40.709 11645.243 - 11695.655: 96.6961% ( 10) 00:08:40.709 11695.655 - 11746.068: 96.7736% ( 13) 00:08:40.709 11746.068 - 11796.480: 96.8273% ( 9) 00:08:40.709 11796.480 - 11846.892: 96.8571% ( 5) 00:08:40.709 11846.892 - 11897.305: 96.8869% ( 5) 00:08:40.709 11897.305 - 11947.717: 96.9227% ( 6) 00:08:40.709 11947.717 - 11998.129: 97.0062% ( 14) 00:08:40.709 11998.129 - 12048.542: 97.0837% ( 13) 00:08:40.709 12048.542 - 12098.954: 97.1613% ( 13) 00:08:40.709 12098.954 - 12149.366: 97.2388% ( 13) 00:08:40.709 12149.366 - 12199.778: 97.3044% ( 11) 00:08:40.709 12199.778 - 12250.191: 97.4237% ( 20) 00:08:40.709 12250.191 - 12300.603: 97.5370% ( 19) 00:08:40.709 12300.603 - 12351.015: 97.6205% ( 14) 00:08:40.709 12351.015 - 12401.428: 97.6682% ( 8) 00:08:40.709 12401.428 - 12451.840: 97.7159% ( 8) 00:08:40.709 12451.840 - 12502.252: 97.7696% ( 9) 00:08:40.709 12502.252 - 12552.665: 97.8292% ( 10) 00:08:40.709 12552.665 - 12603.077: 97.8888% ( 10) 00:08:40.709 12603.077 - 12653.489: 97.9604% ( 12) 00:08:40.710 12653.489 - 12703.902: 98.0260% ( 11) 00:08:40.710 12703.902 - 12754.314: 98.1035% ( 13) 00:08:40.710 12754.314 - 12804.726: 98.1930% ( 15) 00:08:40.710 12804.726 - 12855.138: 98.2765% ( 14) 00:08:40.710 
12855.138 - 12905.551: 98.3719% ( 16) 00:08:40.710 12905.551 - 13006.375: 98.4554% ( 14) 00:08:40.710 13006.375 - 13107.200: 98.5210% ( 11) 00:08:40.710 13107.200 - 13208.025: 98.5448% ( 4) 00:08:40.710 13208.025 - 13308.849: 98.5926% ( 8) 00:08:40.710 13308.849 - 13409.674: 98.6343% ( 7) 00:08:40.710 13409.674 - 13510.498: 98.7118% ( 13) 00:08:40.710 13510.498 - 13611.323: 98.8192% ( 18) 00:08:40.710 13611.323 - 13712.148: 98.8967% ( 13) 00:08:40.710 13712.148 - 13812.972: 98.9683% ( 12) 00:08:40.710 13812.972 - 13913.797: 99.2247% ( 43) 00:08:40.710 13913.797 - 14014.622: 99.2366% ( 2) 00:08:40.710 16434.412 - 16535.237: 99.2426% ( 1) 00:08:40.710 16736.886 - 16837.711: 99.2486% ( 1) 00:08:40.710 16837.711 - 16938.535: 99.2784% ( 5) 00:08:40.710 16938.535 - 17039.360: 99.3201% ( 7) 00:08:40.710 17039.360 - 17140.185: 99.3619% ( 7) 00:08:40.710 17140.185 - 17241.009: 99.3977% ( 6) 00:08:40.710 17241.009 - 17341.834: 99.4394% ( 7) 00:08:40.710 17341.834 - 17442.658: 99.5050% ( 11) 00:08:40.710 17442.658 - 17543.483: 99.5289% ( 4) 00:08:40.710 17543.483 - 17644.308: 99.5587% ( 5) 00:08:40.710 17644.308 - 17745.132: 99.5885% ( 5) 00:08:40.710 17745.132 - 17845.957: 99.6183% ( 5) 00:08:40.710 20568.222 - 20669.046: 99.6243% ( 1) 00:08:40.710 20669.046 - 20769.871: 99.6660% ( 7) 00:08:40.710 20769.871 - 20870.695: 99.7078% ( 7) 00:08:40.710 20870.695 - 20971.520: 99.7436% ( 6) 00:08:40.710 20971.520 - 21072.345: 99.7793% ( 6) 00:08:40.710 21072.345 - 21173.169: 99.8927% ( 19) 00:08:40.710 21173.169 - 21273.994: 99.9284% ( 6) 00:08:40.710 21273.994 - 21374.818: 99.9702% ( 7) 00:08:40.710 21374.818 - 21475.643: 100.0000% ( 5) 00:08:40.710 00:08:40.710 17:43:00 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:08:40.710 00:08:40.710 real 0m2.458s 00:08:40.710 user 0m2.184s 00:08:40.710 sys 0m0.174s 00:08:40.710 17:43:00 nvme.nvme_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:40.710 17:43:00 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:08:40.710 ************************************ 00:08:40.710 END TEST nvme_perf 00:08:40.710 ************************************ 00:08:40.710 17:43:00 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:40.710 17:43:00 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:40.710 17:43:00 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:40.710 17:43:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:40.710 ************************************ 00:08:40.710 START TEST nvme_hello_world 00:08:40.710 ************************************ 00:08:40.710 17:43:00 nvme.nvme_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:40.968 Initializing NVMe Controllers 00:08:40.968 Attached to 0000:00:10.0 00:08:40.968 Namespace ID: 1 size: 6GB 00:08:40.968 Attached to 0000:00:11.0 00:08:40.968 Namespace ID: 1 size: 5GB 00:08:40.968 Attached to 0000:00:13.0 00:08:40.968 Namespace ID: 1 size: 1GB 00:08:40.968 Attached to 0000:00:12.0 00:08:40.968 Namespace ID: 1 size: 4GB 00:08:40.968 Namespace ID: 2 size: 4GB 00:08:40.968 Namespace ID: 3 size: 4GB 00:08:40.968 Initialization complete. 00:08:40.968 INFO: using host memory buffer for IO 00:08:40.968 Hello world! 00:08:40.968 INFO: using host memory buffer for IO 00:08:40.968 Hello world! 00:08:40.968 INFO: using host memory buffer for IO 00:08:40.968 Hello world! 
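The bucketed rows in the nvme_perf output above are latency tracking: each row is a latency bucket in microseconds, with the cumulative percentage of I/Os and the per-bucket count in parentheses. Below is a minimal sketch of the kind of fixed-queue-depth loop that produces such histograms, assuming a controller and namespace were already attached via spdk_nvme_probe(); the bucket layout, helper names, and shared read buffer are illustrative simplifications, not spdk_nvme_perf's actual implementation.

```c
#include "spdk/env.h"
#include "spdk/nvme.h"

#define QUEUE_DEPTH 16   /* matches the -q 16 used by the perf runs in this log */
#define IO_SIZE     4096 /* matches -o 4096 */

struct io_ctx { uint64_t submit_tsc; };

static uint64_t buckets[64]; /* coarse log2-spaced bins (illustrative only) */
static uint64_t ticks_hz;

static void
io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
	struct io_ctx *ctx = arg;
	uint64_t us = (spdk_get_ticks() - ctx->submit_tsc) * 1000000ULL / ticks_hz;
	unsigned bin = 0;

	while (us >>= 1) bin++;  /* bin by floor(log2(latency_us)) */
	buckets[bin]++;
	(void)cpl;
}

static void
run_fixed_queue_depth(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qp)
{
	static struct io_ctx ctxs[QUEUE_DEPTH];
	/* One shared DMA-safe buffer keeps the sketch short. */
	void *buf = spdk_zmalloc(IO_SIZE, IO_SIZE, NULL,
				 SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
	uint32_t lba_count = IO_SIZE / spdk_nvme_ns_get_sector_size(ns);

	ticks_hz = spdk_get_ticks_hz();
	for (int i = 0; i < QUEUE_DEPTH; i++) {
		ctxs[i].submit_tsc = spdk_get_ticks();
		spdk_nvme_ns_cmd_read(ns, qp, buf, 0 /* lba */, lba_count,
				      io_complete, &ctxs[i], 0);
	}
	/* The real tool polls and resubmits on each completion to hold the
	 * queue depth steady for the -t duration; one poll shown here. */
	spdk_nvme_qpair_process_completions(qp, 0);
	spdk_free(buf);
}
```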
00:08:40.968 INFO: using host memory buffer for IO 00:08:40.968 Hello world! 00:08:40.968 INFO: using host memory buffer for IO 00:08:40.968 Hello world! 00:08:40.968 INFO: using host memory buffer for IO 00:08:40.968 Hello world! 00:08:40.968 00:08:40.968 real 0m0.199s 00:08:40.968 user 0m0.067s 00:08:40.968 sys 0m0.089s 00:08:40.968 17:43:00 nvme.nvme_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:40.968 ************************************ 00:08:40.968 END TEST nvme_hello_world 00:08:40.968 ************************************ 00:08:40.968 17:43:00 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:40.968 17:43:00 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:40.968 17:43:00 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:40.968 17:43:00 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:40.969 17:43:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:40.969 ************************************ 00:08:40.969 START TEST nvme_sgl 00:08:40.969 ************************************ 00:08:40.969 17:43:00 nvme.nvme_sgl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:41.227 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:08:41.227 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:08:41.227 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:08:41.227 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:08:41.227 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:08:41.227 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:08:41.227 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:08:41.227 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:08:41.227 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:08:41.227 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:08:41.227 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:08:41.227 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:08:41.227 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:08:41.227 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:08:41.227 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:08:41.227 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:08:41.227 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:08:41.227 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:08:41.227 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:08:41.227 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:08:41.227 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:08:41.227 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:08:41.227 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:08:41.227 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:08:41.227 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:08:41.227 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:08:41.227 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:08:41.227 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:08:41.227 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:08:41.227 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:08:41.227 0000:00:12.0: build_io_request_6 Invalid IO length parameter 
00:08:41.227 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:08:41.227 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:08:41.227 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:08:41.227 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:08:41.227 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:08:41.227 NVMe Readv/Writev Request test 00:08:41.227 Attached to 0000:00:10.0 00:08:41.227 Attached to 0000:00:11.0 00:08:41.227 Attached to 0000:00:13.0 00:08:41.227 Attached to 0000:00:12.0 00:08:41.227 0000:00:10.0: build_io_request_2 test passed 00:08:41.227 0000:00:10.0: build_io_request_4 test passed 00:08:41.227 0000:00:10.0: build_io_request_5 test passed 00:08:41.227 0000:00:10.0: build_io_request_6 test passed 00:08:41.227 0000:00:10.0: build_io_request_7 test passed 00:08:41.227 0000:00:10.0: build_io_request_10 test passed 00:08:41.227 0000:00:11.0: build_io_request_2 test passed 00:08:41.227 0000:00:11.0: build_io_request_4 test passed 00:08:41.227 0000:00:11.0: build_io_request_5 test passed 00:08:41.227 0000:00:11.0: build_io_request_6 test passed 00:08:41.227 0000:00:11.0: build_io_request_7 test passed 00:08:41.227 0000:00:11.0: build_io_request_10 test passed 00:08:41.227 Cleaning up... 00:08:41.227 ************************************ 00:08:41.227 END TEST nvme_sgl 00:08:41.227 ************************************ 00:08:41.227 00:08:41.227 real 0m0.259s 00:08:41.227 user 0m0.116s 00:08:41.227 sys 0m0.098s 00:08:41.227 17:43:01 nvme.nvme_sgl -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:41.227 17:43:01 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:08:41.227 17:43:01 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:41.227 17:43:01 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:41.227 17:43:01 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:41.227 17:43:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:41.227 ************************************ 00:08:41.227 START TEST nvme_e2edp 00:08:41.227 ************************************ 00:08:41.227 17:43:01 nvme.nvme_e2edp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:41.486 NVMe Write/Read with End-to-End data protection test 00:08:41.486 Attached to 0000:00:10.0 00:08:41.486 Attached to 0000:00:11.0 00:08:41.486 Attached to 0000:00:13.0 00:08:41.486 Attached to 0000:00:12.0 00:08:41.486 Cleaning up... 
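The sgl results above split into expected rejections ("Invalid IO length parameter", raised when the scatter-gather payload length does not match the LBA count) and passing cases. A minimal sketch of the vectored submission style being exercised, assuming an attached namespace and qpair; the two-element iovec and helper names are illustrative:

```c
#include "spdk/nvme.h"
#include <sys/uio.h>

struct sgl_ctx {
	struct iovec iov[2];
	int idx;
};

static void
reset_sgl(void *arg, uint32_t offset)
{
	struct sgl_ctx *c = arg;
	c->idx = 0;   /* restart the walk; offset is 0 for a fresh request */
	(void)offset;
}

static int
next_sge(void *arg, void **address, uint32_t *length)
{
	struct sgl_ctx *c = arg;

	*address = c->iov[c->idx].iov_base;
	*length = c->iov[c->idx].iov_len;
	c->idx++;
	return 0;
}

static void done(void *arg, const struct spdk_nvme_cpl *cpl) { (void)arg; (void)cpl; }

static int
submit_vectored_read(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qp,
		     struct sgl_ctx *c, uint32_t lba_count)
{
	/* Fails the way the log shows when the total SGE length disagrees
	 * with lba_count * sector_size. */
	return spdk_nvme_ns_cmd_readv(ns, qp, 0 /* lba */, lba_count,
				      done, c, 0, reset_sgl, next_sge);
}
```

The readv/writev entry points differ from the flat read/write calls only in taking the reset_sgl/next_sge callback pair, which the driver uses to walk the payload while building the command's SGL or PRP list.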
00:08:41.486 00:08:41.486 real 0m0.200s 00:08:41.486 user 0m0.066s 00:08:41.486 sys 0m0.093s 00:08:41.486 ************************************ 00:08:41.486 END TEST nvme_e2edp 00:08:41.486 ************************************ 00:08:41.486 17:43:01 nvme.nvme_e2edp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:41.486 17:43:01 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:41.486 17:43:01 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:41.486 17:43:01 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:41.486 17:43:01 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:41.486 17:43:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:41.486 ************************************ 00:08:41.486 START TEST nvme_reserve 00:08:41.486 ************************************ 00:08:41.486 17:43:01 nvme.nvme_reserve -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:41.746 ===================================================== 00:08:41.746 NVMe Controller at PCI bus 0, device 16, function 0 00:08:41.746 ===================================================== 00:08:41.746 Reservations: Not Supported 00:08:41.746 ===================================================== 00:08:41.746 NVMe Controller at PCI bus 0, device 17, function 0 00:08:41.746 ===================================================== 00:08:41.746 Reservations: Not Supported 00:08:41.746 ===================================================== 00:08:41.746 NVMe Controller at PCI bus 0, device 19, function 0 00:08:41.746 ===================================================== 00:08:41.746 Reservations: Not Supported 00:08:41.746 ===================================================== 00:08:41.746 NVMe Controller at PCI bus 0, device 18, function 0 00:08:41.746 ===================================================== 00:08:41.746 Reservations: Not Supported 00:08:41.746 Reservation test passed 00:08:41.746 00:08:41.746 real 0m0.205s 00:08:41.746 user 0m0.071s 00:08:41.746 sys 0m0.088s 00:08:41.746 17:43:01 nvme.nvme_reserve -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:41.746 17:43:01 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:41.746 ************************************ 00:08:41.746 END TEST nvme_reserve 00:08:41.746 ************************************ 00:08:41.746 17:43:01 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:41.746 17:43:01 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:41.746 17:43:01 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:41.746 17:43:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:41.746 ************************************ 00:08:41.746 START TEST nvme_err_injection 00:08:41.746 ************************************ 00:08:41.746 17:43:01 nvme.nvme_err_injection -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:42.004 NVMe Error Injection test 00:08:42.004 Attached to 0000:00:10.0 00:08:42.004 Attached to 0000:00:11.0 00:08:42.004 Attached to 0000:00:13.0 00:08:42.004 Attached to 0000:00:12.0 00:08:42.004 0000:00:10.0: get features failed as expected 00:08:42.004 0000:00:11.0: get features failed as expected 00:08:42.004 0000:00:13.0: get features failed as expected 00:08:42.004 0000:00:12.0: get features failed as expected 00:08:42.004 
0000:00:10.0: get features successfully as expected 00:08:42.004 0000:00:11.0: get features successfully as expected 00:08:42.004 0000:00:13.0: get features successfully as expected 00:08:42.004 0000:00:12.0: get features successfully as expected 00:08:42.004 0000:00:10.0: read failed as expected 00:08:42.004 0000:00:11.0: read failed as expected 00:08:42.004 0000:00:13.0: read failed as expected 00:08:42.004 0000:00:12.0: read failed as expected 00:08:42.004 0000:00:10.0: read successfully as expected 00:08:42.004 0000:00:11.0: read successfully as expected 00:08:42.004 0000:00:13.0: read successfully as expected 00:08:42.004 0000:00:12.0: read successfully as expected 00:08:42.004 Cleaning up... 00:08:42.004 00:08:42.004 real 0m0.201s 00:08:42.004 user 0m0.076s 00:08:42.004 sys 0m0.081s 00:08:42.004 17:43:01 nvme.nvme_err_injection -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:42.004 17:43:01 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:42.004 ************************************ 00:08:42.004 END TEST nvme_err_injection 00:08:42.004 ************************************ 00:08:42.004 17:43:01 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:42.004 17:43:01 nvme -- common/autotest_common.sh@1103 -- # '[' 9 -le 1 ']' 00:08:42.004 17:43:01 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:42.004 17:43:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:42.004 ************************************ 00:08:42.004 START TEST nvme_overhead 00:08:42.004 ************************************ 00:08:42.004 17:43:01 nvme.nvme_overhead -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:43.376 Initializing NVMe Controllers 00:08:43.376 Attached to 0000:00:10.0 00:08:43.376 Attached to 0000:00:11.0 00:08:43.376 Attached to 0000:00:13.0 00:08:43.376 Attached to 0000:00:12.0 00:08:43.376 Initialization complete. Launching workers. 
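The paired "failed as expected" / "successfully as expected" lines above come from software error injection on the admin queue: a fake failure is armed for a GET FEATURES command, observed, removed, and the same command then succeeds. A sketch of that pattern, assuming the error-injection helpers declared in spdk/nvme.h; treating a NULL qpair as "the admin queue" is my assumption here, and submission plus error handling are elided:

```c
#include "spdk/nvme.h"

static void
get_features_expecting_both(struct spdk_nvme_ctrlr *ctrlr)
{
	/* Arm: the next GET FEATURES completes with Invalid Field, once. */
	spdk_nvme_qpair_add_cmd_error_injection(ctrlr, NULL /* admin queue */,
						SPDK_NVME_OPC_GET_FEATURES,
						false /* still submit it */,
						0 /* no delay */, 1 /* count */,
						SPDK_NVME_SCT_GENERIC,
						SPDK_NVME_SC_INVALID_FIELD);

	/* ... submit GET FEATURES and poll
	 * spdk_nvme_ctrlr_process_admin_completions(): it fails as expected. */

	spdk_nvme_qpair_remove_cmd_error_injection(ctrlr, NULL,
						    SPDK_NVME_OPC_GET_FEATURES);

	/* ... resubmit: with the injection disarmed the command succeeds,
	 * matching the "successfully as expected" lines above. */
}
```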
00:08:43.376 submit (in ns) avg, min, max = 11509.8, 10410.0, 58795.4 00:08:43.376 complete (in ns) avg, min, max = 7633.0, 7142.3, 1283069.2 00:08:43.376 00:08:43.376 Submit histogram 00:08:43.376 ================ 00:08:43.376 Range in us Cumulative Count 00:08:43.376 10.388 - 10.437: 0.0091% ( 1) 00:08:43.376 10.486 - 10.535: 0.0273% ( 2) 00:08:43.376 10.831 - 10.880: 0.0637% ( 4) 00:08:43.376 10.880 - 10.929: 0.2730% ( 23) 00:08:43.376 10.929 - 10.978: 1.1741% ( 99) 00:08:43.376 10.978 - 11.028: 4.0229% ( 313) 00:08:43.376 11.028 - 11.077: 8.9833% ( 545) 00:08:43.376 11.077 - 11.126: 17.1293% ( 895) 00:08:43.376 11.126 - 11.175: 28.0877% ( 1204) 00:08:43.376 11.175 - 11.225: 40.5206% ( 1366) 00:08:43.376 11.225 - 11.274: 52.2254% ( 1286) 00:08:43.377 11.274 - 11.323: 62.0916% ( 1084) 00:08:43.377 11.323 - 11.372: 69.1454% ( 775) 00:08:43.377 11.372 - 11.422: 74.3151% ( 568) 00:08:43.377 11.422 - 11.471: 77.8101% ( 384) 00:08:43.377 11.471 - 11.520: 80.4587% ( 291) 00:08:43.377 11.520 - 11.569: 82.3792% ( 211) 00:08:43.377 11.569 - 11.618: 84.0448% ( 183) 00:08:43.377 11.618 - 11.668: 85.2644% ( 134) 00:08:43.377 11.668 - 11.717: 86.1018% ( 92) 00:08:43.377 11.717 - 11.766: 86.9391% ( 92) 00:08:43.377 11.766 - 11.815: 87.7583% ( 90) 00:08:43.377 11.815 - 11.865: 88.6684% ( 100) 00:08:43.377 11.865 - 11.914: 89.6059% ( 103) 00:08:43.377 11.914 - 11.963: 90.4159% ( 89) 00:08:43.377 11.963 - 12.012: 91.3625% ( 104) 00:08:43.377 12.012 - 12.062: 92.1999% ( 92) 00:08:43.377 12.062 - 12.111: 92.9462% ( 82) 00:08:43.377 12.111 - 12.160: 93.5924% ( 71) 00:08:43.377 12.160 - 12.209: 94.1749% ( 64) 00:08:43.377 12.209 - 12.258: 94.6664% ( 54) 00:08:43.377 12.258 - 12.308: 94.9941% ( 36) 00:08:43.377 12.308 - 12.357: 95.1852% ( 21) 00:08:43.377 12.357 - 12.406: 95.4219% ( 26) 00:08:43.377 12.406 - 12.455: 95.5766% ( 17) 00:08:43.377 12.455 - 12.505: 95.6767% ( 11) 00:08:43.377 12.505 - 12.554: 95.7404% ( 7) 00:08:43.377 12.554 - 12.603: 95.7859% ( 5) 00:08:43.377 12.603 - 12.702: 95.8314% ( 5) 00:08:43.377 12.702 - 12.800: 95.8405% ( 1) 00:08:43.377 12.800 - 12.898: 95.8769% ( 4) 00:08:43.377 12.898 - 12.997: 95.9680% ( 10) 00:08:43.377 12.997 - 13.095: 96.0226% ( 6) 00:08:43.377 13.095 - 13.194: 96.1409% ( 13) 00:08:43.377 13.194 - 13.292: 96.3138% ( 19) 00:08:43.377 13.292 - 13.391: 96.4868% ( 19) 00:08:43.377 13.391 - 13.489: 96.6233% ( 15) 00:08:43.377 13.489 - 13.588: 96.8235% ( 22) 00:08:43.377 13.588 - 13.686: 97.0875% ( 29) 00:08:43.377 13.686 - 13.785: 97.2604% ( 19) 00:08:43.377 13.785 - 13.883: 97.4151% ( 17) 00:08:43.377 13.883 - 13.982: 97.5152% ( 11) 00:08:43.377 13.982 - 14.080: 97.6518% ( 15) 00:08:43.377 14.080 - 14.178: 97.7610% ( 12) 00:08:43.377 14.178 - 14.277: 97.8429% ( 9) 00:08:43.377 14.277 - 14.375: 97.9157% ( 8) 00:08:43.377 14.375 - 14.474: 97.9612% ( 5) 00:08:43.377 14.474 - 14.572: 97.9703% ( 1) 00:08:43.377 14.572 - 14.671: 98.0067% ( 4) 00:08:43.377 14.671 - 14.769: 98.0522% ( 5) 00:08:43.377 14.769 - 14.868: 98.0978% ( 5) 00:08:43.377 14.868 - 14.966: 98.1251% ( 3) 00:08:43.377 14.966 - 15.065: 98.1888% ( 7) 00:08:43.377 15.065 - 15.163: 98.2161% ( 3) 00:08:43.377 15.163 - 15.262: 98.2434% ( 3) 00:08:43.377 15.262 - 15.360: 98.3071% ( 7) 00:08:43.377 15.360 - 15.458: 98.3708% ( 7) 00:08:43.377 15.458 - 15.557: 98.4072% ( 4) 00:08:43.377 15.557 - 15.655: 98.4345% ( 3) 00:08:43.377 15.655 - 15.754: 98.4618% ( 3) 00:08:43.377 15.754 - 15.852: 98.5073% ( 5) 00:08:43.377 15.852 - 15.951: 98.5164% ( 1) 00:08:43.377 16.148 - 16.246: 98.5437% ( 3) 00:08:43.377 16.246 - 
16.345: 98.5801% ( 4) 00:08:43.377 16.345 - 16.443: 98.5983% ( 2) 00:08:43.377 16.443 - 16.542: 98.6074% ( 1) 00:08:43.377 16.542 - 16.640: 98.6530% ( 5) 00:08:43.377 16.640 - 16.738: 98.7167% ( 7) 00:08:43.377 16.738 - 16.837: 98.7986% ( 9) 00:08:43.377 16.837 - 16.935: 98.8350% ( 4) 00:08:43.377 16.935 - 17.034: 98.8987% ( 7) 00:08:43.377 17.034 - 17.132: 98.9715% ( 8) 00:08:43.377 17.132 - 17.231: 99.0989% ( 14) 00:08:43.377 17.231 - 17.329: 99.2082% ( 12) 00:08:43.377 17.329 - 17.428: 99.2628% ( 6) 00:08:43.377 17.428 - 17.526: 99.3447% ( 9) 00:08:43.377 17.526 - 17.625: 99.4084% ( 7) 00:08:43.377 17.625 - 17.723: 99.4903% ( 9) 00:08:43.377 17.723 - 17.822: 99.5540% ( 7) 00:08:43.377 17.822 - 17.920: 99.5813% ( 3) 00:08:43.377 17.920 - 18.018: 99.6268% ( 5) 00:08:43.377 18.018 - 18.117: 99.6450% ( 2) 00:08:43.377 18.117 - 18.215: 99.6632% ( 2) 00:08:43.377 18.215 - 18.314: 99.6996% ( 4) 00:08:43.377 18.314 - 18.412: 99.7087% ( 1) 00:08:43.377 18.609 - 18.708: 99.7178% ( 1) 00:08:43.377 18.708 - 18.806: 99.7270% ( 1) 00:08:43.377 18.905 - 19.003: 99.7543% ( 3) 00:08:43.377 19.003 - 19.102: 99.7634% ( 1) 00:08:43.377 19.102 - 19.200: 99.7725% ( 1) 00:08:43.377 19.200 - 19.298: 99.7816% ( 1) 00:08:43.377 19.298 - 19.397: 99.7907% ( 1) 00:08:43.377 19.495 - 19.594: 99.7998% ( 1) 00:08:43.377 19.791 - 19.889: 99.8089% ( 1) 00:08:43.377 20.086 - 20.185: 99.8180% ( 1) 00:08:43.377 20.185 - 20.283: 99.8271% ( 1) 00:08:43.377 20.283 - 20.382: 99.8453% ( 2) 00:08:43.377 20.775 - 20.874: 99.8635% ( 2) 00:08:43.377 21.071 - 21.169: 99.8726% ( 1) 00:08:43.377 21.366 - 21.465: 99.8817% ( 1) 00:08:43.377 21.858 - 21.957: 99.8908% ( 1) 00:08:43.377 21.957 - 22.055: 99.8999% ( 1) 00:08:43.377 22.154 - 22.252: 99.9090% ( 1) 00:08:43.377 22.843 - 22.942: 99.9181% ( 1) 00:08:43.377 23.434 - 23.532: 99.9272% ( 1) 00:08:43.377 23.828 - 23.926: 99.9363% ( 1) 00:08:43.377 23.926 - 24.025: 99.9454% ( 1) 00:08:43.377 26.388 - 26.585: 99.9545% ( 1) 00:08:43.377 27.175 - 27.372: 99.9636% ( 1) 00:08:43.377 32.098 - 32.295: 99.9727% ( 1) 00:08:43.377 49.034 - 49.231: 99.9818% ( 1) 00:08:43.377 57.895 - 58.289: 99.9909% ( 1) 00:08:43.377 58.683 - 59.077: 100.0000% ( 1) 00:08:43.377 00:08:43.377 Complete histogram 00:08:43.377 ================== 00:08:43.377 Range in us Cumulative Count 00:08:43.377 7.138 - 7.188: 0.1729% ( 19) 00:08:43.377 7.188 - 7.237: 4.2050% ( 443) 00:08:43.377 7.237 - 7.286: 19.1863% ( 1646) 00:08:43.377 7.286 - 7.335: 41.0212% ( 2399) 00:08:43.377 7.335 - 7.385: 60.5807% ( 2149) 00:08:43.377 7.385 - 7.434: 73.5779% ( 1428) 00:08:43.377 7.434 - 7.483: 82.1334% ( 940) 00:08:43.377 7.483 - 7.532: 87.8857% ( 632) 00:08:43.377 7.532 - 7.582: 91.1532% ( 359) 00:08:43.377 7.582 - 7.631: 93.1555% ( 220) 00:08:43.377 7.631 - 7.680: 94.3843% ( 135) 00:08:43.377 7.680 - 7.729: 95.0669% ( 75) 00:08:43.377 7.729 - 7.778: 95.5129% ( 49) 00:08:43.377 7.778 - 7.828: 95.8496% ( 37) 00:08:43.377 7.828 - 7.877: 96.0135% ( 18) 00:08:43.377 7.877 - 7.926: 96.1591% ( 16) 00:08:43.377 7.926 - 7.975: 96.2774% ( 13) 00:08:43.377 7.975 - 8.025: 96.3502% ( 8) 00:08:43.377 8.025 - 8.074: 96.4048% ( 6) 00:08:43.377 8.074 - 8.123: 96.5414% ( 15) 00:08:43.377 8.123 - 8.172: 96.6415% ( 11) 00:08:43.377 8.172 - 8.222: 96.7871% ( 16) 00:08:43.377 8.222 - 8.271: 96.9418% ( 17) 00:08:43.377 8.271 - 8.320: 97.1876% ( 27) 00:08:43.377 8.320 - 8.369: 97.4060% ( 24) 00:08:43.377 8.369 - 8.418: 97.5426% ( 15) 00:08:43.377 8.418 - 8.468: 97.6882% ( 16) 00:08:43.377 8.468 - 8.517: 97.7519% ( 7) 00:08:43.377 8.517 - 8.566: 97.8156% ( 7) 
00:08:43.377 8.566 - 8.615: 97.8520% ( 4) 00:08:43.377 8.615 - 8.665: 97.8884% ( 4) 00:08:43.377 8.665 - 8.714: 97.8975% ( 1) 00:08:43.377 8.714 - 8.763: 97.9521% ( 6) 00:08:43.377 8.812 - 8.862: 97.9703% ( 2) 00:08:43.377 8.862 - 8.911: 97.9885% ( 2) 00:08:43.377 8.911 - 8.960: 97.9976% ( 1) 00:08:43.377 8.960 - 9.009: 98.0067% ( 1) 00:08:43.377 9.255 - 9.305: 98.0249% ( 2) 00:08:43.378 9.305 - 9.354: 98.0340% ( 1) 00:08:43.378 9.403 - 9.452: 98.0431% ( 1) 00:08:43.378 9.649 - 9.698: 98.0522% ( 1) 00:08:43.378 9.797 - 9.846: 98.0613% ( 1) 00:08:43.378 9.846 - 9.895: 98.0704% ( 1) 00:08:43.378 9.895 - 9.945: 98.0795% ( 1) 00:08:43.378 9.994 - 10.043: 98.0887% ( 1) 00:08:43.378 10.043 - 10.092: 98.0978% ( 1) 00:08:43.378 10.092 - 10.142: 98.1251% ( 3) 00:08:43.378 10.191 - 10.240: 98.1342% ( 1) 00:08:43.378 10.240 - 10.289: 98.1706% ( 4) 00:08:43.378 10.289 - 10.338: 98.1797% ( 1) 00:08:43.378 10.338 - 10.388: 98.1888% ( 1) 00:08:43.378 10.388 - 10.437: 98.1979% ( 1) 00:08:43.378 10.535 - 10.585: 98.2070% ( 1) 00:08:43.378 10.585 - 10.634: 98.2161% ( 1) 00:08:43.378 10.634 - 10.683: 98.2343% ( 2) 00:08:43.378 10.683 - 10.732: 98.2525% ( 2) 00:08:43.378 10.732 - 10.782: 98.2616% ( 1) 00:08:43.378 10.782 - 10.831: 98.2707% ( 1) 00:08:43.378 10.880 - 10.929: 98.2798% ( 1) 00:08:43.378 10.929 - 10.978: 98.2889% ( 1) 00:08:43.378 11.077 - 11.126: 98.2980% ( 1) 00:08:43.378 11.126 - 11.175: 98.3071% ( 1) 00:08:43.378 11.471 - 11.520: 98.3162% ( 1) 00:08:43.378 11.569 - 11.618: 98.3253% ( 1) 00:08:43.378 11.766 - 11.815: 98.3344% ( 1) 00:08:43.378 11.914 - 11.963: 98.3435% ( 1) 00:08:43.378 12.111 - 12.160: 98.3617% ( 2) 00:08:43.378 12.160 - 12.209: 98.3708% ( 1) 00:08:43.378 12.209 - 12.258: 98.3799% ( 1) 00:08:43.378 12.308 - 12.357: 98.3890% ( 1) 00:08:43.378 12.455 - 12.505: 98.3981% ( 1) 00:08:43.378 12.702 - 12.800: 98.4163% ( 2) 00:08:43.378 12.800 - 12.898: 98.4345% ( 2) 00:08:43.378 12.898 - 12.997: 98.4800% ( 5) 00:08:43.378 12.997 - 13.095: 98.5710% ( 10) 00:08:43.378 13.095 - 13.194: 98.6712% ( 11) 00:08:43.378 13.194 - 13.292: 98.7531% ( 9) 00:08:43.378 13.292 - 13.391: 98.8896% ( 15) 00:08:43.378 13.391 - 13.489: 98.9715% ( 9) 00:08:43.378 13.489 - 13.588: 99.0807% ( 12) 00:08:43.378 13.588 - 13.686: 99.1626% ( 9) 00:08:43.378 13.686 - 13.785: 99.2537% ( 10) 00:08:43.378 13.785 - 13.883: 99.3356% ( 9) 00:08:43.378 13.883 - 13.982: 99.4175% ( 9) 00:08:43.378 13.982 - 14.080: 99.4266% ( 1) 00:08:43.378 14.080 - 14.178: 99.4721% ( 5) 00:08:43.378 14.178 - 14.277: 99.5085% ( 4) 00:08:43.378 14.277 - 14.375: 99.5358% ( 3) 00:08:43.378 14.375 - 14.474: 99.5995% ( 7) 00:08:43.378 14.474 - 14.572: 99.6086% ( 1) 00:08:43.378 14.572 - 14.671: 99.6359% ( 3) 00:08:43.378 14.671 - 14.769: 99.6541% ( 2) 00:08:43.378 14.769 - 14.868: 99.6814% ( 3) 00:08:43.378 14.868 - 14.966: 99.7270% ( 5) 00:08:43.378 14.966 - 15.065: 99.7361% ( 1) 00:08:43.378 15.065 - 15.163: 99.7452% ( 1) 00:08:43.378 15.163 - 15.262: 99.7543% ( 1) 00:08:43.378 15.458 - 15.557: 99.7634% ( 1) 00:08:43.378 15.557 - 15.655: 99.7816% ( 2) 00:08:43.378 15.655 - 15.754: 99.7907% ( 1) 00:08:43.378 15.754 - 15.852: 99.7998% ( 1) 00:08:43.378 15.852 - 15.951: 99.8089% ( 1) 00:08:43.378 15.951 - 16.049: 99.8180% ( 1) 00:08:43.378 16.443 - 16.542: 99.8271% ( 1) 00:08:43.378 16.640 - 16.738: 99.8362% ( 1) 00:08:43.378 16.738 - 16.837: 99.8453% ( 1) 00:08:43.378 17.428 - 17.526: 99.8726% ( 3) 00:08:43.378 17.822 - 17.920: 99.8817% ( 1) 00:08:43.378 18.117 - 18.215: 99.8999% ( 2) 00:08:43.378 18.314 - 18.412: 99.9090% ( 1) 00:08:43.378 
18.412 - 18.511: 99.9181% ( 1) 00:08:43.378 18.609 - 18.708: 99.9272% ( 1) 00:08:43.378 18.806 - 18.905: 99.9363% ( 1) 00:08:43.378 19.298 - 19.397: 99.9454% ( 1) 00:08:43.378 19.889 - 19.988: 99.9545% ( 1) 00:08:43.378 20.775 - 20.874: 99.9636% ( 1) 00:08:43.378 21.465 - 21.563: 99.9727% ( 1) 00:08:43.378 28.357 - 28.554: 99.9818% ( 1) 00:08:43.378 37.415 - 37.612: 99.9909% ( 1) 00:08:43.378 1279.212 - 1285.514: 100.0000% ( 1) 00:08:43.378 00:08:43.378 ************************************ 00:08:43.378 END TEST nvme_overhead 00:08:43.378 ************************************ 00:08:43.378 00:08:43.378 real 0m1.203s 00:08:43.378 user 0m1.065s 00:08:43.378 sys 0m0.097s 00:08:43.378 17:43:03 nvme.nvme_overhead -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:43.378 17:43:03 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:43.378 17:43:03 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:43.378 17:43:03 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:08:43.378 17:43:03 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:43.378 17:43:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:43.378 ************************************ 00:08:43.378 START TEST nvme_arbitration 00:08:43.378 ************************************ 00:08:43.378 17:43:03 nvme.nvme_arbitration -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:46.678 Initializing NVMe Controllers 00:08:46.678 Attached to 0000:00:10.0 00:08:46.678 Attached to 0000:00:11.0 00:08:46.678 Attached to 0000:00:13.0 00:08:46.678 Attached to 0000:00:12.0 00:08:46.678 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:08:46.678 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:08:46.678 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:08:46.678 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:46.678 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:46.678 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:46.678 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:46.678 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:46.678 Initialization complete. Launching workers. 
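The overhead figures above are reported in nanoseconds (submit averages about 11.5 us here) with the histograms bucketed in microseconds; they separate the cost of the submission call itself from the completion-reaping side. One way such per-phase numbers can be gathered, as a sketch; this is not necessarily how the overhead tool partitions its timing:

```c
#include "spdk/env.h"
#include "spdk/nvme.h"
#include <stdbool.h>

static void io_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	*(bool *)arg = true;
	(void)cpl;
}

static void
measure_one_io(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qp, void *buf)
{
	bool done = false;
	uint64_t hz = spdk_get_ticks_hz();

	/* Phase 1: cost of the submission call itself. */
	uint64_t t0 = spdk_get_ticks();
	spdk_nvme_ns_cmd_read(ns, qp, buf, 0 /* lba */, 1, io_done, &done, 0);
	uint64_t submit_ns = (spdk_get_ticks() - t0) * 1000000000ULL / hz;

	/* Phase 2: cost of polling until the completion is reaped. */
	uint64_t t1 = spdk_get_ticks();
	while (!done) {
		spdk_nvme_qpair_process_completions(qp, 0);
	}
	uint64_t complete_ns = (spdk_get_ticks() - t1) * 1000000000ULL / hz;

	/* feed submit_ns / complete_ns into histogram bins here */
	(void)submit_ns; (void)complete_ns;
}
```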
00:08:46.678 Starting thread on core 1 with urgent priority queue 00:08:46.678 Starting thread on core 2 with urgent priority queue 00:08:46.678 Starting thread on core 3 with urgent priority queue 00:08:46.678 Starting thread on core 0 with urgent priority queue 00:08:46.678 QEMU NVMe Ctrl (12340 ) core 0: 5702.00 IO/s 17.54 secs/100000 ios 00:08:46.678 QEMU NVMe Ctrl (12342 ) core 0: 5696.00 IO/s 17.56 secs/100000 ios 00:08:46.678 QEMU NVMe Ctrl (12341 ) core 1: 5690.33 IO/s 17.57 secs/100000 ios 00:08:46.678 QEMU NVMe Ctrl (12342 ) core 1: 5696.67 IO/s 17.55 secs/100000 ios 00:08:46.678 QEMU NVMe Ctrl (12343 ) core 2: 5499.33 IO/s 18.18 secs/100000 ios 00:08:46.678 QEMU NVMe Ctrl (12342 ) core 3: 5148.00 IO/s 19.43 secs/100000 ios 00:08:46.678 ======================================================== 00:08:46.678 00:08:46.678 00:08:46.678 real 0m3.227s 00:08:46.678 user 0m9.040s 00:08:46.678 sys 0m0.100s 00:08:46.678 ************************************ 00:08:46.678 END TEST nvme_arbitration 00:08:46.678 ************************************ 00:08:46.678 17:43:06 nvme.nvme_arbitration -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:46.678 17:43:06 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:46.678 17:43:06 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:46.678 17:43:06 nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:46.678 17:43:06 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:46.678 17:43:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:46.678 ************************************ 00:08:46.678 START TEST nvme_single_aen 00:08:46.678 ************************************ 00:08:46.678 17:43:06 nvme.nvme_single_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:46.678 Asynchronous Event Request test 00:08:46.678 Attached to 0000:00:10.0 00:08:46.678 Attached to 0000:00:11.0 00:08:46.678 Attached to 0000:00:13.0 00:08:46.678 Attached to 0000:00:12.0 00:08:46.678 Reset controller to setup AER completions for this process 00:08:46.678 Registering asynchronous event callbacks... 
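In the arbitration run above, each worker thread owns a qpair created at a weighted-round-robin priority class, and the "secs/100000 ios" column is simply 100000 divided by the IO/s figure on the same row. A sketch of the qpair setup involved, assuming WRR arbitration is negotiated at attach time; helper names are illustrative:

```c
#include "spdk/nvme.h"

static bool
probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	/* Arbitration mode must be chosen before the controller is enabled;
	 * SPDK_NVME_CC_AMS_WRR selects weighted round robin. */
	opts->arb_mechanism = SPDK_NVME_CC_AMS_WRR;
	(void)ctx; (void)trid;
	return true;
}

static struct spdk_nvme_qpair *
alloc_urgent_qpair(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_io_qpair_opts qopts;

	spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &qopts, sizeof(qopts));
	qopts.qprio = SPDK_NVME_QPRIO_URGENT;  /* vs HIGH / MEDIUM / LOW */
	return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &qopts, sizeof(qopts));
}
```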
00:08:46.678 Getting orig temperature thresholds of all controllers 00:08:46.678 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:46.678 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:46.678 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:46.678 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:46.678 Setting all controllers temperature threshold low to trigger AER 00:08:46.678 Waiting for all controllers temperature threshold to be set lower 00:08:46.678 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:46.678 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:46.678 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:46.678 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:46.678 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:46.678 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:46.678 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:46.678 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:46.678 Waiting for all controllers to trigger AER and reset threshold 00:08:46.678 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:46.678 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:46.678 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:46.678 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:46.678 Cleaning up... 00:08:46.940 ************************************ 00:08:46.940 END TEST nvme_single_aen 00:08:46.940 ************************************ 00:08:46.940 00:08:46.940 real 0m0.233s 00:08:46.940 user 0m0.076s 00:08:46.940 sys 0m0.109s 00:08:46.940 17:43:06 nvme.nvme_single_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:46.940 17:43:06 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:46.940 17:43:06 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:46.940 17:43:06 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:46.940 17:43:06 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:46.940 17:43:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:46.940 ************************************ 00:08:46.940 START TEST nvme_doorbell_aers 00:08:46.940 ************************************ 00:08:46.940 17:43:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1127 -- # nvme_doorbell_aers 00:08:46.940 17:43:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:46.940 17:43:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:46.940 17:43:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:46.940 17:43:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:46.940 17:43:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:46.940 17:43:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:08:46.940 17:43:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:46.940 17:43:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:46.940 17:43:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 
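The AER round trip above works by lowering each controller's temperature threshold below its current 323 Kelvin reading so the drive raises a temperature asynchronous event, then restoring the original 343 Kelvin threshold. A sketch of that flow; the 320 Kelvin value is an illustrative choice, and the threshold-in-the-low-bits-of-cdw11 layout follows the NVMe Set Features definition:

```c
#include "spdk/nvme.h"

static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	/* Fires for the "aer_cb for log page 2" events above; a real handler
	 * would decode cdw0 for event type/info and fetch the log page. */
	(void)arg; (void)cpl;
}

static void set_feat_done(void *arg, const struct spdk_nvme_cpl *cpl)
{ (void)arg; (void)cpl; }

static void
trigger_temperature_aer(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

	/* Threshold (in Kelvin) goes in the low 16 bits of cdw11. */
	spdk_nvme_ctrlr_cmd_set_feature(ctrlr,
					SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
					320 /* cdw11 */, 0 /* cdw12 */,
					NULL, 0, set_feat_done, NULL);

	/* Call repeatedly until aer_cb has observed the event; the test
	 * then resets the threshold, as the log shows. */
	spdk_nvme_ctrlr_process_admin_completions(ctrlr);
}
```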
00:08:46.940 17:43:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:46.940 17:43:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:46.940 17:43:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:46.940 17:43:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:47.201 [2024-11-05 17:43:07.040801] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76210) is not found. Dropping the request. 00:08:57.203 Executing: test_write_invalid_db 00:08:57.203 Waiting for AER completion... 00:08:57.203 Failure: test_write_invalid_db 00:08:57.203 00:08:57.203 Executing: test_invalid_db_write_overflow_sq 00:08:57.203 Waiting for AER completion... 00:08:57.203 Failure: test_invalid_db_write_overflow_sq 00:08:57.203 00:08:57.203 Executing: test_invalid_db_write_overflow_cq 00:08:57.203 Waiting for AER completion... 00:08:57.203 Failure: test_invalid_db_write_overflow_cq 00:08:57.203 00:08:57.203 17:43:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:57.203 17:43:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:57.203 [2024-11-05 17:43:17.052716] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76210) is not found. Dropping the request. 00:09:07.217 Executing: test_write_invalid_db 00:09:07.217 Waiting for AER completion... 00:09:07.217 Failure: test_write_invalid_db 00:09:07.217 00:09:07.217 Executing: test_invalid_db_write_overflow_sq 00:09:07.217 Waiting for AER completion... 00:09:07.217 Failure: test_invalid_db_write_overflow_sq 00:09:07.217 00:09:07.217 Executing: test_invalid_db_write_overflow_cq 00:09:07.217 Waiting for AER completion... 00:09:07.217 Failure: test_invalid_db_write_overflow_cq 00:09:07.217 00:09:07.217 17:43:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:07.217 17:43:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:07.217 [2024-11-05 17:43:27.057052] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76210) is not found. Dropping the request. 00:09:17.186 Executing: test_write_invalid_db 00:09:17.186 Waiting for AER completion... 00:09:17.186 Failure: test_write_invalid_db 00:09:17.186 00:09:17.186 Executing: test_invalid_db_write_overflow_sq 00:09:17.186 Waiting for AER completion... 00:09:17.186 Failure: test_invalid_db_write_overflow_sq 00:09:17.186 00:09:17.186 Executing: test_invalid_db_write_overflow_cq 00:09:17.186 Waiting for AER completion... 
00:09:17.186 Failure: test_invalid_db_write_overflow_cq 00:09:17.186 00:09:17.186 17:43:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:17.186 17:43:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:17.186 [2024-11-05 17:43:37.112592] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76210) is not found. Dropping the request. 00:09:27.199 Executing: test_write_invalid_db 00:09:27.199 Waiting for AER completion... 00:09:27.199 Failure: test_write_invalid_db 00:09:27.199 00:09:27.199 Executing: test_invalid_db_write_overflow_sq 00:09:27.199 Waiting for AER completion... 00:09:27.199 Failure: test_invalid_db_write_overflow_sq 00:09:27.199 00:09:27.199 Executing: test_invalid_db_write_overflow_cq 00:09:27.199 Waiting for AER completion... 00:09:27.199 Failure: test_invalid_db_write_overflow_cq 00:09:27.199 00:09:27.199 00:09:27.199 real 0m40.191s 00:09:27.199 user 0m34.342s 00:09:27.199 sys 0m5.453s 00:09:27.199 17:43:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:27.199 17:43:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:09:27.199 ************************************ 00:09:27.199 END TEST nvme_doorbell_aers 00:09:27.199 ************************************ 00:09:27.199 17:43:46 nvme -- nvme/nvme.sh@97 -- # uname 00:09:27.199 17:43:46 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:27.199 17:43:46 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:27.199 17:43:46 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:09:27.199 17:43:46 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:27.199 17:43:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:27.199 ************************************ 00:09:27.199 START TEST nvme_multi_aen 00:09:27.199 ************************************ 00:09:27.199 17:43:46 nvme.nvme_multi_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:27.199 [2024-11-05 17:43:47.163767] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76210) is not found. Dropping the request. 00:09:27.199 [2024-11-05 17:43:47.164004] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76210) is not found. Dropping the request. 00:09:27.199 [2024-11-05 17:43:47.164113] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76210) is not found. Dropping the request. 00:09:27.199 [2024-11-05 17:43:47.165401] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76210) is not found. Dropping the request. 00:09:27.199 [2024-11-05 17:43:47.165538] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76210) is not found. Dropping the request. 00:09:27.199 [2024-11-05 17:43:47.165618] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76210) is not found. Dropping the request. 00:09:27.199 [2024-11-05 17:43:47.166717] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76210) is not found. 
Dropping the request. 00:09:27.199 [2024-11-05 17:43:47.166854] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76210) is not found. Dropping the request. 00:09:27.199 [2024-11-05 17:43:47.166939] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76210) is not found. Dropping the request. 00:09:27.199 [2024-11-05 17:43:47.168074] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76210) is not found. Dropping the request. 00:09:27.199 [2024-11-05 17:43:47.168196] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76210) is not found. Dropping the request. 00:09:27.199 [2024-11-05 17:43:47.168282] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76210) is not found. Dropping the request. 00:09:27.199 Child process pid: 76736 00:09:27.457 [Child] Asynchronous Event Request test 00:09:27.457 [Child] Attached to 0000:00:10.0 00:09:27.457 [Child] Attached to 0000:00:11.0 00:09:27.457 [Child] Attached to 0000:00:13.0 00:09:27.457 [Child] Attached to 0000:00:12.0 00:09:27.457 [Child] Registering asynchronous event callbacks... 00:09:27.457 [Child] Getting orig temperature thresholds of all controllers 00:09:27.457 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:27.457 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:27.457 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:27.457 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:27.457 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:27.457 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:27.457 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:27.457 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:27.457 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:27.457 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:27.457 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:27.457 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:27.457 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:27.457 [Child] Cleaning up... 00:09:27.457 Asynchronous Event Request test 00:09:27.457 Attached to 0000:00:10.0 00:09:27.457 Attached to 0000:00:11.0 00:09:27.457 Attached to 0000:00:13.0 00:09:27.457 Attached to 0000:00:12.0 00:09:27.457 Reset controller to setup AER completions for this process 00:09:27.457 Registering asynchronous event callbacks... 
00:09:27.457 Getting orig temperature thresholds of all controllers 00:09:27.457 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:27.457 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:27.457 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:27.457 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:27.457 Setting all controllers temperature threshold low to trigger AER 00:09:27.457 Waiting for all controllers temperature threshold to be set lower 00:09:27.457 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:27.457 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:27.457 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:27.457 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:27.458 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:27.458 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:27.458 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:27.458 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:27.458 Waiting for all controllers to trigger AER and reset threshold 00:09:27.458 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:27.458 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:27.458 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:27.458 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:27.458 Cleaning up... 00:09:27.458 ************************************ 00:09:27.458 END TEST nvme_multi_aen 00:09:27.458 ************************************ 00:09:27.458 00:09:27.458 real 0m0.435s 00:09:27.458 user 0m0.135s 00:09:27.458 sys 0m0.191s 00:09:27.458 17:43:47 nvme.nvme_multi_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:27.458 17:43:47 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:27.716 17:43:47 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:27.716 17:43:47 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:27.716 17:43:47 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:27.716 17:43:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:27.716 ************************************ 00:09:27.716 START TEST nvme_startup 00:09:27.716 ************************************ 00:09:27.716 17:43:47 nvme.nvme_startup -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:27.716 Initializing NVMe Controllers 00:09:27.716 Attached to 0000:00:10.0 00:09:27.716 Attached to 0000:00:11.0 00:09:27.716 Attached to 0000:00:13.0 00:09:27.716 Attached to 0000:00:12.0 00:09:27.716 Initialization complete. 00:09:27.716 Time used:158416.375 (us). 
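The [Child] block above is the multi-process variant of the AER test: a forked process attaches to the same four controllers through SPDK's shared-memory environment, which is what the "-i 0" argument on every test invocation in this log selects. A sketch of the env setup both sides would perform; primary/secondary negotiation itself is handled inside the env layer:

```c
#include "spdk/env.h"
#include "spdk/nvme.h"

static int
init_shared_env(const char *name)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = name;  /* distinct per process */
	opts.shm_id = 0;   /* same id in parent and child, like -i 0 above */
	if (spdk_env_init(&opts) < 0) {
		return -1;
	}
	/* Each process then runs its own spdk_nvme_probe(); attaching to a
	 * controller the primary already initialized yields a handle backed
	 * by the same shared controller state. */
	return 0;
}
```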
00:09:27.716 00:09:27.716 real 0m0.222s 00:09:27.716 user 0m0.080s 00:09:27.716 sys 0m0.100s 00:09:27.716 17:43:47 nvme.nvme_startup -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:27.716 17:43:47 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:27.716 ************************************ 00:09:27.716 END TEST nvme_startup 00:09:27.716 ************************************ 00:09:27.974 17:43:47 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:27.974 17:43:47 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:27.974 17:43:47 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:27.974 17:43:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:27.974 ************************************ 00:09:27.974 START TEST nvme_multi_secondary 00:09:27.974 ************************************ 00:09:27.974 17:43:47 nvme.nvme_multi_secondary -- common/autotest_common.sh@1127 -- # nvme_multi_secondary 00:09:27.974 17:43:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=76785 00:09:27.974 17:43:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=76786 00:09:27.974 17:43:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:27.974 17:43:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:27.974 17:43:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:31.252 Initializing NVMe Controllers 00:09:31.252 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:31.252 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:31.252 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:31.252 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:31.252 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:31.252 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:31.252 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:31.252 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:31.252 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:31.252 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:31.252 Initialization complete. Launching workers. 
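The startup test's "Time used:158416.375 (us)" above is the wall-clock cost of enumerating and attaching all four controllers. A sketch of timing that step, assuming default probe behavior and empty callbacks:

```c
#include "spdk/env.h"
#include "spdk/nvme.h"
#include <stdio.h>

static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
		     struct spdk_nvme_ctrlr_opts *opts)
{ (void)ctx; (void)trid; (void)opts; return true; /* attach everything */ }

static void attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
		      struct spdk_nvme_ctrlr *ctrlr,
		      const struct spdk_nvme_ctrlr_opts *opts)
{ (void)ctx; (void)trid; (void)ctrlr; (void)opts; }

static void
time_startup(void)
{
	uint64_t hz = spdk_get_ticks_hz();
	uint64_t t0 = spdk_get_ticks();

	spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);

	printf("Time used:%.3f (us)\n",
	       (double)(spdk_get_ticks() - t0) * 1e6 / (double)hz);
}
```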
00:09:31.252 ======================================================== 00:09:31.252 Latency(us) 00:09:31.252 Device Information : IOPS MiB/s Average min max 00:09:31.252 PCIE (0000:00:10.0) NSID 1 from core 1: 8019.42 31.33 1993.82 685.19 5736.81 00:09:31.252 PCIE (0000:00:11.0) NSID 1 from core 1: 8019.42 31.33 1994.72 692.46 5466.91 00:09:31.252 PCIE (0000:00:13.0) NSID 1 from core 1: 8018.76 31.32 1994.85 687.03 5957.11 00:09:31.252 PCIE (0000:00:12.0) NSID 1 from core 1: 8020.42 31.33 1994.53 695.81 6284.65 00:09:31.252 PCIE (0000:00:12.0) NSID 2 from core 1: 8020.76 31.33 1994.46 706.02 6080.05 00:09:31.252 PCIE (0000:00:12.0) NSID 3 from core 1: 8020.76 31.33 1994.47 701.33 6159.36 00:09:31.252 ======================================================== 00:09:31.252 Total : 48119.54 187.97 1994.48 685.19 6284.65 00:09:31.252 00:09:31.252 Initializing NVMe Controllers 00:09:31.252 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:31.252 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:31.252 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:31.252 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:31.252 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:31.252 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:31.252 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:31.252 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:31.252 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:31.252 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:31.252 Initialization complete. Launching workers. 00:09:31.252 ======================================================== 00:09:31.252 Latency(us) 00:09:31.252 Device Information : IOPS MiB/s Average min max 00:09:31.252 PCIE (0000:00:10.0) NSID 1 from core 2: 3226.37 12.60 4957.84 1060.22 13960.13 00:09:31.252 PCIE (0000:00:11.0) NSID 1 from core 2: 3226.37 12.60 4958.94 1082.55 14082.99 00:09:31.252 PCIE (0000:00:13.0) NSID 1 from core 2: 3226.37 12.60 4959.04 1064.71 16291.65 00:09:31.252 PCIE (0000:00:12.0) NSID 1 from core 2: 3226.37 12.60 4959.04 987.43 13166.80 00:09:31.252 PCIE (0000:00:12.0) NSID 2 from core 2: 3226.37 12.60 4959.02 1120.98 12845.16 00:09:31.252 PCIE (0000:00:12.0) NSID 3 from core 2: 3226.37 12.60 4959.00 1112.59 13605.63 00:09:31.252 ======================================================== 00:09:31.252 Total : 19358.24 75.62 4958.81 987.43 16291.65 00:09:31.252 00:09:31.252 17:43:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 76785 00:09:33.148 Initializing NVMe Controllers 00:09:33.148 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:33.148 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:33.148 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:33.148 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:33.148 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:33.148 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:33.148 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:33.148 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:33.148 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:33.148 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:33.148 Initialization complete. Launching workers. 
00:09:33.148 ======================================================== 00:09:33.148 Latency(us) 00:09:33.148 Device Information : IOPS MiB/s Average min max 00:09:33.148 PCIE (0000:00:10.0) NSID 1 from core 0: 10555.61 41.23 1514.49 681.55 7229.07 00:09:33.148 PCIE (0000:00:11.0) NSID 1 from core 0: 10554.01 41.23 1515.60 692.55 7130.96 00:09:33.148 PCIE (0000:00:13.0) NSID 1 from core 0: 10550.61 41.21 1516.07 645.28 6924.60 00:09:33.148 PCIE (0000:00:12.0) NSID 1 from core 0: 10553.81 41.23 1515.58 617.04 6944.53 00:09:33.148 PCIE (0000:00:12.0) NSID 2 from core 0: 10541.61 41.18 1517.31 523.45 7175.19 00:09:33.148 PCIE (0000:00:12.0) NSID 3 from core 0: 10552.21 41.22 1515.76 450.76 7317.11 00:09:33.148 ======================================================== 00:09:33.148 Total : 63307.84 247.30 1515.80 450.76 7317.11 00:09:33.148 00:09:33.148 17:43:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 76786 00:09:33.148 17:43:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=76852 00:09:33.148 17:43:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=76854 00:09:33.148 17:43:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:33.148 17:43:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:33.148 17:43:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:36.426 Initializing NVMe Controllers 00:09:36.426 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:36.426 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:36.426 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:36.426 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:36.426 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:36.426 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:36.426 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:36.426 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:36.426 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:36.426 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:36.426 Initialization complete. Launching workers. 
00:09:36.426 ======================================================== 00:09:36.426 Latency(us) 00:09:36.426 Device Information : IOPS MiB/s Average min max 00:09:36.426 PCIE (0000:00:10.0) NSID 1 from core 0: 7003.95 27.36 2283.05 812.13 6937.25 00:09:36.426 PCIE (0000:00:11.0) NSID 1 from core 0: 7003.95 27.36 2284.23 826.77 6944.62 00:09:36.426 PCIE (0000:00:13.0) NSID 1 from core 0: 7003.95 27.36 2284.41 816.52 6708.94 00:09:36.426 PCIE (0000:00:12.0) NSID 1 from core 0: 7003.95 27.36 2284.54 833.06 6729.56 00:09:36.426 PCIE (0000:00:12.0) NSID 2 from core 0: 7003.95 27.36 2284.63 812.88 6771.08 00:09:36.426 PCIE (0000:00:12.0) NSID 3 from core 0: 7003.95 27.36 2284.70 828.58 6799.56 00:09:36.426 ======================================================== 00:09:36.426 Total : 42023.68 164.16 2284.26 812.13 6944.62 00:09:36.426 00:09:36.426 Initializing NVMe Controllers 00:09:36.426 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:36.426 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:36.426 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:36.426 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:36.426 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:36.426 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:36.426 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:36.426 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:36.426 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:36.426 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:36.426 Initialization complete. Launching workers. 00:09:36.426 ======================================================== 00:09:36.426 Latency(us) 00:09:36.426 Device Information : IOPS MiB/s Average min max 00:09:36.426 PCIE (0000:00:10.0) NSID 1 from core 1: 7030.81 27.46 2274.26 821.88 8884.23 00:09:36.426 PCIE (0000:00:11.0) NSID 1 from core 1: 7030.81 27.46 2275.18 825.13 8693.29 00:09:36.426 PCIE (0000:00:13.0) NSID 1 from core 1: 7030.81 27.46 2275.10 837.86 8824.55 00:09:36.426 PCIE (0000:00:12.0) NSID 1 from core 1: 7030.81 27.46 2275.04 843.66 8231.05 00:09:36.426 PCIE (0000:00:12.0) NSID 2 from core 1: 7030.81 27.46 2274.95 649.15 8653.08 00:09:36.427 PCIE (0000:00:12.0) NSID 3 from core 1: 7030.81 27.46 2274.88 470.74 8490.44 00:09:36.427 ======================================================== 00:09:36.427 Total : 42184.87 164.78 2274.90 470.74 8884.23 00:09:36.427 00:09:38.323 Initializing NVMe Controllers 00:09:38.323 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:38.323 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:38.323 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:38.323 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:38.323 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:38.323 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:38.323 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:38.323 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:38.323 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:38.323 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:38.323 Initialization complete. Launching workers. 
00:09:38.323 ======================================================== 00:09:38.323 Latency(us) 00:09:38.323 Device Information : IOPS MiB/s Average min max 00:09:38.323 PCIE (0000:00:10.0) NSID 1 from core 2: 4223.65 16.50 3786.41 786.11 19882.32 00:09:38.323 PCIE (0000:00:11.0) NSID 1 from core 2: 4223.65 16.50 3787.88 804.94 19520.73 00:09:38.323 PCIE (0000:00:13.0) NSID 1 from core 2: 4223.65 16.50 3787.85 821.60 19376.65 00:09:38.323 PCIE (0000:00:12.0) NSID 1 from core 2: 4223.65 16.50 3787.61 803.46 18858.68 00:09:38.323 PCIE (0000:00:12.0) NSID 2 from core 2: 4223.65 16.50 3787.55 826.42 19435.23 00:09:38.323 PCIE (0000:00:12.0) NSID 3 from core 2: 4223.65 16.50 3787.50 658.30 19611.48 00:09:38.323 ======================================================== 00:09:38.323 Total : 25341.89 98.99 3787.46 658.30 19882.32 00:09:38.323 00:09:38.323 17:43:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 76852 00:09:38.323 17:43:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 76854 00:09:38.323 00:09:38.323 real 0m10.575s 00:09:38.323 user 0m18.285s 00:09:38.323 sys 0m0.669s 00:09:38.323 17:43:58 nvme.nvme_multi_secondary -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:38.323 17:43:58 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:38.323 ************************************ 00:09:38.323 END TEST nvme_multi_secondary 00:09:38.323 ************************************ 00:09:38.580 17:43:58 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:38.580 17:43:58 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:38.580 17:43:58 nvme -- common/autotest_common.sh@1091 -- # [[ -e /proc/75819 ]] 00:09:38.580 17:43:58 nvme -- common/autotest_common.sh@1092 -- # kill 75819 00:09:38.581 17:43:58 nvme -- common/autotest_common.sh@1093 -- # wait 75819 00:09:38.581 [2024-11-05 17:43:58.331907] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76729) is not found. Dropping the request. 00:09:38.581 [2024-11-05 17:43:58.332531] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76729) is not found. Dropping the request. 00:09:38.581 [2024-11-05 17:43:58.332565] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76729) is not found. Dropping the request. 00:09:38.581 [2024-11-05 17:43:58.332577] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76729) is not found. Dropping the request. 00:09:38.581 [2024-11-05 17:43:58.333123] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76729) is not found. Dropping the request. 00:09:38.581 [2024-11-05 17:43:58.333147] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76729) is not found. Dropping the request. 00:09:38.581 [2024-11-05 17:43:58.333160] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76729) is not found. Dropping the request. 00:09:38.581 [2024-11-05 17:43:58.333170] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76729) is not found. Dropping the request. 00:09:38.581 [2024-11-05 17:43:58.334655] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76729) is not found. Dropping the request. 
00:09:38.581 [2024-11-05 17:43:58.334822] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76729) is not found. Dropping the request. 00:09:38.581 [2024-11-05 17:43:58.334846] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76729) is not found. Dropping the request. 00:09:38.581 [2024-11-05 17:43:58.334857] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76729) is not found. Dropping the request. 00:09:38.581 [2024-11-05 17:43:58.336026] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76729) is not found. Dropping the request. 00:09:38.581 [2024-11-05 17:43:58.336084] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76729) is not found. Dropping the request. 00:09:38.581 [2024-11-05 17:43:58.336100] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76729) is not found. Dropping the request. 00:09:38.581 [2024-11-05 17:43:58.336110] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76729) is not found. Dropping the request. 00:09:38.581 17:43:58 nvme -- common/autotest_common.sh@1095 -- # rm -f /var/run/spdk_stub0 00:09:38.581 17:43:58 nvme -- common/autotest_common.sh@1099 -- # echo 2 00:09:38.581 17:43:58 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:38.581 17:43:58 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:38.581 17:43:58 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:38.581 17:43:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:38.581 ************************************ 00:09:38.581 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:38.581 ************************************ 00:09:38.581 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:38.581 * Looking for test storage... 
00:09:38.581 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:38.581 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:38.581 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lcov --version 00:09:38.581 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:38.581 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:38.581 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:38.581 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:38.581 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:38.581 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.581 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:38.581 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:38.581 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:38.581 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:38.581 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:38.581 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:38.581 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:38.581 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:38.581 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:38.581 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:38.581 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:38.581 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:38.581 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:38.581 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.581 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:38.581 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:38.839 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:38.839 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:38.839 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.839 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:38.839 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:38.839 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:38.839 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:38.839 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:38.839 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.839 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:38.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.839 --rc genhtml_branch_coverage=1 00:09:38.839 --rc genhtml_function_coverage=1 00:09:38.839 --rc genhtml_legend=1 00:09:38.839 --rc geninfo_all_blocks=1 00:09:38.839 --rc geninfo_unexecuted_blocks=1 00:09:38.839 00:09:38.839 ' 00:09:38.839 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:38.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.839 --rc genhtml_branch_coverage=1 00:09:38.839 --rc genhtml_function_coverage=1 00:09:38.839 --rc genhtml_legend=1 00:09:38.839 --rc geninfo_all_blocks=1 00:09:38.839 --rc geninfo_unexecuted_blocks=1 00:09:38.839 00:09:38.839 ' 00:09:38.839 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:38.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.839 --rc genhtml_branch_coverage=1 00:09:38.839 --rc genhtml_function_coverage=1 00:09:38.839 --rc genhtml_legend=1 00:09:38.839 --rc geninfo_all_blocks=1 00:09:38.839 --rc geninfo_unexecuted_blocks=1 00:09:38.839 00:09:38.839 ' 00:09:38.839 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:38.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.839 --rc genhtml_branch_coverage=1 00:09:38.840 --rc genhtml_function_coverage=1 00:09:38.840 --rc genhtml_legend=1 00:09:38.840 --rc geninfo_all_blocks=1 00:09:38.840 --rc geninfo_unexecuted_blocks=1 00:09:38.840 00:09:38.840 ' 00:09:38.840 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:38.840 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:38.840 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:38.840 
17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:38.840 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:38.840 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:38.840 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:38.840 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:09:38.840 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:38.840 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:38.840 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:38.840 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:09:38.840 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:38.840 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:38.840 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:38.840 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:38.840 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:38.840 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:38.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.840 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:38.840 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:38.840 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=77019 00:09:38.840 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:38.840 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:38.840 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 77019 00:09:38.840 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # '[' -z 77019 ']' 00:09:38.840 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.840 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:38.840 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:38.840 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:38.840 17:43:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:38.840 [2024-11-05 17:43:58.720157] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:09:38.840 [2024-11-05 17:43:58.720408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77019 ] 00:09:39.098 [2024-11-05 17:43:58.860591] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:39.098 [2024-11-05 17:43:58.885382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:39.098 [2024-11-05 17:43:58.904997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.098 [2024-11-05 17:43:58.905182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:39.098 [2024-11-05 17:43:58.905556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.098 [2024-11-05 17:43:58.905327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:39.663 17:43:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:39.663 17:43:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@866 -- # return 0 00:09:39.663 17:43:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:39.664 17:43:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.664 17:43:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:39.664 nvme0n1 00:09:39.664 17:43:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.664 17:43:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:39.664 17:43:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_GeI9b.txt 00:09:39.664 17:43:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:39.664 17:43:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.664 17:43:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:39.664 true 00:09:39.664 17:43:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.664 17:43:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:39.664 17:43:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1730828639 00:09:39.664 17:43:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=77042 00:09:39.664 17:43:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:39.664 17:43:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 
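For readers following the xtrace above: bdev_nvme_reset_stuck_adm_cmd first discovers the first NVMe BDF (gen_nvme.sh | jq -r '.config[].params.traddr'), starts spdk_tgt on four cores (-m 0xF), and then drives the whole scenario over rpc.py. A condensed sketch of that RPC sequence, reconstructed from this run's trace (controller name, BDF, and flag values are the ones visible in this log, not fixed constants):

    # attach the PCIe controller as bdev controller "nvme0"
    rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    # arm a one-shot admin-command error injection: opcode 10 (0x0a, Get Features)
    # is held for up to 15 s (--do_not_submit) and then completed with SCT=0/SC=1
    rpc.py bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # send a Get Features admin command in the background; it hangs on the
    # injected error until the controller reset below completes it manually
    rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c <base64 command> &
    rpc.py bdev_nvme_reset_controller nvme0    # must flush the stuck command
    wait $!                                    # collect the saved completion
    rpc.py bdev_nvme_detach_controller nvme0

The assertions later in the trace then verify that the completion carried exactly the injected SCT/SC and that the whole round trip stayed under test_timeout (5 s).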
00:09:39.664 17:43:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:42.189 [2024-11-05 17:44:01.644971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:42.189 [2024-11-05 17:44:01.645382] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:42.189 [2024-11-05 17:44:01.645415] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:42.189 [2024-11-05 17:44:01.645427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.189 [2024-11-05 17:44:01.647198] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 77042 00:09:42.189 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 77042 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 77042 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_GeI9b.txt 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 
00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_GeI9b.txt 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 77019 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # '[' -z 77019 ']' 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # kill -0 77019 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # uname 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77019 00:09:42.189 killing process with pid 77019 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:42.189 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77019' 00:09:42.190 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@971 -- # kill 77019 00:09:42.190 17:44:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@976 -- # wait 77019 00:09:42.190 17:44:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:42.190 17:44:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:42.190 
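The base64/hexdump steps above are extracting the NVMe status fields from the completion that bdev_nvme_send_cmd saved to the temp file. A completion queue entry is 16 bytes; its last two bytes hold the status field, with bit 0 the phase tag, bits 1-8 the Status Code (SC), and bits 9-11 the Status Code Type (SCT). A standalone sketch of the same decode (assumes a little-endian host, as used in this run; the temp-file name is the one the mktemp call above produced):

    # pull the saved completion and decode its status field
    cpl=$(jq -r .cpl /tmp/err_inj_GeI9b.txt)   # here: AAAAAAAAAAAAAAAAAAACAA==
    status=$(base64 -d <<<"$cpl" | tail -c 2 | hexdump -e '1/2 "%u"')
    printf 'sct=0x%x sc=0x%x\n' $(( (status >> 9) & 0x7 )) $(( (status >> 1) & 0xff ))
    # AAAAAAAAAAAAAAAAAAACAA== decodes to 14 zero bytes followed by 0x02 0x00,
    # so status=0x0002 -> sct=0x0, sc=0x1 -- exactly the values injected earlier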
00:09:42.190 real 0m3.605s 00:09:42.190 user 0m12.785s 00:09:42.190 sys 0m0.496s 00:09:42.190 17:44:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:42.190 17:44:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:42.190 ************************************ 00:09:42.190 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:42.190 ************************************ 00:09:42.190 17:44:02 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:42.190 17:44:02 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:42.190 17:44:02 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:42.190 17:44:02 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:42.190 17:44:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:42.190 ************************************ 00:09:42.190 START TEST nvme_fio 00:09:42.190 ************************************ 00:09:42.190 17:44:02 nvme.nvme_fio -- common/autotest_common.sh@1127 -- # nvme_fio_test 00:09:42.190 17:44:02 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:42.190 17:44:02 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:42.190 17:44:02 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:42.190 17:44:02 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:42.190 17:44:02 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:09:42.190 17:44:02 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:42.190 17:44:02 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:42.190 17:44:02 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:42.190 17:44:02 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:42.190 17:44:02 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:42.190 17:44:02 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:42.190 17:44:02 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:42.190 17:44:02 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:42.190 17:44:02 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:42.190 17:44:02 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:42.469 17:44:02 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:42.469 17:44:02 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:42.746 17:44:02 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:42.746 17:44:02 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:42.746 17:44:02 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:42.746 17:44:02 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:09:42.746 17:44:02 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:09:42.746 17:44:02 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:09:42.746 17:44:02 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:42.746 17:44:02 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:09:42.746 17:44:02 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:09:42.746 17:44:02 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:09:42.746 17:44:02 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:42.746 17:44:02 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:09:42.746 17:44:02 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:09:42.746 17:44:02 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:42.746 17:44:02 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:42.746 17:44:02 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:09:42.746 17:44:02 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:42.746 17:44:02 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:43.004 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:43.004 fio-3.35 00:09:43.004 Starting 1 thread 00:09:46.290 00:09:46.290 test: (groupid=0, jobs=1): err= 0: pid=77166: Tue Nov 5 17:44:05 2024 00:09:46.290 read: IOPS=13.1k, BW=51.2MiB/s (53.7MB/s)(102MiB/2001msec) 00:09:46.290 slat (nsec): min=3940, max=88052, avg=5948.17, stdev=2622.30 00:09:46.290 clat (usec): min=191, max=11851, avg=3271.40, stdev=1325.28 00:09:46.290 lat (usec): min=196, max=11940, avg=3277.35, stdev=1325.88 00:09:46.290 clat percentiles (usec): 00:09:46.290 | 1.00th=[ 1237], 5.00th=[ 1647], 10.00th=[ 2008], 20.00th=[ 2540], 00:09:46.290 | 30.00th=[ 2638], 40.00th=[ 2704], 50.00th=[ 2769], 60.00th=[ 3032], 00:09:46.290 | 70.00th=[ 3392], 80.00th=[ 4146], 90.00th=[ 5276], 95.00th=[ 6128], 00:09:46.290 | 99.00th=[ 7570], 99.50th=[ 8225], 99.90th=[ 9503], 99.95th=[10421], 00:09:46.290 | 99.99th=[11863] 00:09:46.290 bw ( KiB/s): min=36200, max=83184, per=100.00%, avg=56474.67, stdev=24143.90, samples=3 00:09:46.290 iops : min= 9050, max=20796, avg=14118.67, stdev=6035.97, samples=3 00:09:46.290 write: IOPS=13.1k, BW=51.2MiB/s (53.7MB/s)(102MiB/2001msec); 0 zone resets 00:09:46.290 slat (nsec): min=4153, max=91476, avg=6369.98, stdev=2654.54 00:09:46.290 clat (usec): min=208, max=40244, avg=6469.22, stdev=7985.89 00:09:46.290 lat (usec): min=213, max=40249, avg=6475.59, stdev=7986.03 00:09:46.290 clat percentiles (usec): 00:09:46.290 | 1.00th=[ 1385], 5.00th=[ 1958], 10.00th=[ 2409], 20.00th=[ 2606], 00:09:46.290 | 30.00th=[ 2671], 40.00th=[ 2737], 50.00th=[ 2966], 60.00th=[ 3392], 00:09:46.290 | 70.00th=[ 4359], 80.00th=[ 5932], 90.00th=[21890], 95.00th=[26870], 00:09:46.290 | 99.00th=[34341], 99.50th=[35914], 99.90th=[38011], 99.95th=[38536], 00:09:46.290 | 99.99th=[39584] 00:09:46.290 bw ( KiB/s): min=36224, max=83224, per=100.00%, avg=56306.67, stdev=24233.95, samples=3 00:09:46.290 iops : min= 9056, max=20806, avg=14076.67, stdev=6058.49, samples=3 
00:09:46.290 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.03% 00:09:46.290 lat (msec) : 2=7.64%, 4=65.01%, 10=19.85%, 20=1.60%, 50=5.82% 00:09:46.290 cpu : usr=99.25%, sys=0.00%, ctx=5, majf=0, minf=626 00:09:46.290 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:46.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.290 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:46.290 issued rwts: total=26211,26220,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.290 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:46.290 00:09:46.290 Run status group 0 (all jobs): 00:09:46.290 READ: bw=51.2MiB/s (53.7MB/s), 51.2MiB/s-51.2MiB/s (53.7MB/s-53.7MB/s), io=102MiB (107MB), run=2001-2001msec 00:09:46.290 WRITE: bw=51.2MiB/s (53.7MB/s), 51.2MiB/s-51.2MiB/s (53.7MB/s-53.7MB/s), io=102MiB (107MB), run=2001-2001msec 00:09:46.290 ----------------------------------------------------- 00:09:46.290 Suppressions used: 00:09:46.290 count bytes template 00:09:46.290 1 32 /usr/src/fio/parse.c 00:09:46.290 1 8 libtcmalloc_minimal.so 00:09:46.290 ----------------------------------------------------- 00:09:46.290 00:09:46.290 17:44:05 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:46.290 17:44:05 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:46.290 17:44:05 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:46.290 17:44:05 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:46.290 17:44:06 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:46.290 17:44:06 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:46.551 17:44:06 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:46.551 17:44:06 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:46.551 17:44:06 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:46.551 17:44:06 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:09:46.551 17:44:06 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:46.551 17:44:06 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:09:46.551 17:44:06 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:46.551 17:44:06 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:09:46.551 17:44:06 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:09:46.551 17:44:06 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:09:46.551 17:44:06 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:46.551 17:44:06 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:09:46.551 17:44:06 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:09:46.551 17:44:06 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:46.551 17:44:06 nvme.nvme_fio -- 
common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:46.551 17:44:06 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:09:46.551 17:44:06 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:46.551 17:44:06 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:46.551 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:46.551 fio-3.35 00:09:46.551 Starting 1 thread 00:09:50.744 00:09:50.744 test: (groupid=0, jobs=1): err= 0: pid=77221: Tue Nov 5 17:44:10 2024 00:09:50.744 read: IOPS=15.5k, BW=60.7MiB/s (63.6MB/s)(121MiB/2001msec) 00:09:50.744 slat (nsec): min=3400, max=61115, avg=5186.43, stdev=2599.25 00:09:50.744 clat (usec): min=199, max=14189, avg=3226.26, stdev=1296.60 00:09:50.744 lat (usec): min=202, max=14251, avg=3231.45, stdev=1297.56 00:09:50.744 clat percentiles (usec): 00:09:50.744 | 1.00th=[ 1254], 5.00th=[ 2057], 10.00th=[ 2343], 20.00th=[ 2507], 00:09:50.744 | 30.00th=[ 2606], 40.00th=[ 2671], 50.00th=[ 2737], 60.00th=[ 2835], 00:09:50.744 | 70.00th=[ 3163], 80.00th=[ 3884], 90.00th=[ 5014], 95.00th=[ 5997], 00:09:50.744 | 99.00th=[ 7832], 99.50th=[ 8717], 99.90th=[11600], 99.95th=[12649], 00:09:50.744 | 99.99th=[14091] 00:09:50.744 bw ( KiB/s): min=39896, max=84448, per=89.21%, avg=55413.33, stdev=25164.64, samples=3 00:09:50.744 iops : min= 9974, max=21112, avg=13853.33, stdev=6291.16, samples=3 00:09:50.744 write: IOPS=15.5k, BW=60.7MiB/s (63.6MB/s)(121MiB/2001msec); 0 zone resets 00:09:50.744 slat (nsec): min=3464, max=71716, avg=5482.30, stdev=2741.08 00:09:50.744 clat (usec): min=192, max=40622, avg=4988.36, stdev=6434.31 00:09:50.744 lat (usec): min=196, max=40626, avg=4993.84, stdev=6434.67 00:09:50.744 clat percentiles (usec): 00:09:50.745 | 1.00th=[ 1565], 5.00th=[ 2245], 10.00th=[ 2442], 20.00th=[ 2540], 00:09:50.745 | 30.00th=[ 2638], 40.00th=[ 2704], 50.00th=[ 2769], 60.00th=[ 2900], 00:09:50.745 | 70.00th=[ 3425], 80.00th=[ 4424], 90.00th=[ 6915], 95.00th=[23200], 00:09:50.745 | 99.00th=[33424], 99.50th=[35390], 99.90th=[38536], 99.95th=[39060], 00:09:50.745 | 99.99th=[39584] 00:09:50.745 bw ( KiB/s): min=39280, max=84352, per=89.25%, avg=55466.67, stdev=25076.15, samples=3 00:09:50.745 iops : min= 9820, max=21088, avg=13866.67, stdev=6269.04, samples=3 00:09:50.745 lat (usec) : 250=0.01%, 500=0.02%, 750=0.03%, 1000=0.22% 00:09:50.745 lat (msec) : 2=3.36%, 4=75.12%, 10=17.01%, 20=1.05%, 50=3.19% 00:09:50.745 cpu : usr=99.30%, sys=0.05%, ctx=3, majf=0, minf=625 00:09:50.745 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:50.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:50.745 issued rwts: total=31072,31090,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.745 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:50.745 00:09:50.745 Run status group 0 (all jobs): 00:09:50.745 READ: bw=60.7MiB/s (63.6MB/s), 60.7MiB/s-60.7MiB/s (63.6MB/s-63.6MB/s), io=121MiB (127MB), run=2001-2001msec 00:09:50.745 WRITE: bw=60.7MiB/s (63.6MB/s), 60.7MiB/s-60.7MiB/s (63.6MB/s-63.6MB/s), io=121MiB (127MB), run=2001-2001msec 00:09:50.745 ----------------------------------------------------- 00:09:50.745 
Suppressions used: 00:09:50.745 count bytes template 00:09:50.745 1 32 /usr/src/fio/parse.c 00:09:50.745 1 8 libtcmalloc_minimal.so 00:09:50.745 ----------------------------------------------------- 00:09:50.745 00:09:50.745 17:44:10 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:50.745 17:44:10 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:50.745 17:44:10 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:50.745 17:44:10 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:50.745 17:44:10 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:50.745 17:44:10 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:51.003 17:44:10 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:51.003 17:44:10 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:51.003 17:44:10 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:51.003 17:44:10 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:09:51.003 17:44:10 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:51.003 17:44:10 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:09:51.003 17:44:10 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:51.003 17:44:10 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:09:51.003 17:44:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:09:51.003 17:44:10 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:09:51.003 17:44:10 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:51.003 17:44:10 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:09:51.003 17:44:10 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:09:51.003 17:44:10 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:51.003 17:44:10 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:51.003 17:44:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:09:51.004 17:44:10 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:51.004 17:44:10 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:51.262 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:51.262 fio-3.35 00:09:51.262 Starting 1 thread 00:09:57.828 00:09:57.828 test: (groupid=0, jobs=1): err= 0: pid=77277: Tue Nov 5 17:44:17 2024 00:09:57.828 read: IOPS=19.7k, BW=77.1MiB/s (80.9MB/s)(154MiB/2001msec) 00:09:57.828 slat (nsec): min=3603, max=78941, avg=5790.41, stdev=2507.64 00:09:57.828 clat (usec): min=505, max=11170, avg=3222.38, stdev=1010.97 
00:09:57.828 lat (usec): min=509, max=11241, avg=3228.17, stdev=1012.25 00:09:57.828 clat percentiles (usec): 00:09:57.828 | 1.00th=[ 2311], 5.00th=[ 2474], 10.00th=[ 2507], 20.00th=[ 2606], 00:09:57.828 | 30.00th=[ 2671], 40.00th=[ 2737], 50.00th=[ 2835], 60.00th=[ 2933], 00:09:57.828 | 70.00th=[ 3130], 80.00th=[ 3654], 90.00th=[ 4752], 95.00th=[ 5538], 00:09:57.828 | 99.00th=[ 6849], 99.50th=[ 7570], 99.90th=[ 8225], 99.95th=[ 8455], 00:09:57.828 | 99.99th=[10814] 00:09:57.828 bw ( KiB/s): min=76880, max=85312, per=100.00%, avg=80626.67, stdev=4293.66, samples=3 00:09:57.828 iops : min=19220, max=21328, avg=20156.67, stdev=1073.41, samples=3 00:09:57.828 write: IOPS=19.7k, BW=77.0MiB/s (80.7MB/s)(154MiB/2001msec); 0 zone resets 00:09:57.828 slat (usec): min=3, max=357, avg= 6.11, stdev= 3.12 00:09:57.828 clat (usec): min=480, max=10818, avg=3244.11, stdev=1025.52 00:09:57.828 lat (usec): min=485, max=10831, avg=3250.22, stdev=1026.81 00:09:57.828 clat percentiles (usec): 00:09:57.828 | 1.00th=[ 2343], 5.00th=[ 2474], 10.00th=[ 2540], 20.00th=[ 2606], 00:09:57.828 | 30.00th=[ 2704], 40.00th=[ 2769], 50.00th=[ 2835], 60.00th=[ 2933], 00:09:57.828 | 70.00th=[ 3130], 80.00th=[ 3654], 90.00th=[ 4817], 95.00th=[ 5604], 00:09:57.828 | 99.00th=[ 6915], 99.50th=[ 7635], 99.90th=[ 8291], 99.95th=[ 8455], 00:09:57.828 | 99.99th=[10421] 00:09:57.828 bw ( KiB/s): min=76960, max=85656, per=100.00%, avg=80722.67, stdev=4464.63, samples=3 00:09:57.828 iops : min=19240, max=21414, avg=20180.67, stdev=1116.16, samples=3 00:09:57.828 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.01% 00:09:57.828 lat (msec) : 2=0.24%, 4=83.97%, 10=15.73%, 20=0.02% 00:09:57.828 cpu : usr=98.40%, sys=0.45%, ctx=40, majf=0, minf=626 00:09:57.828 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:57.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.828 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:57.828 issued rwts: total=39515,39440,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.828 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:57.828 00:09:57.828 Run status group 0 (all jobs): 00:09:57.828 READ: bw=77.1MiB/s (80.9MB/s), 77.1MiB/s-77.1MiB/s (80.9MB/s-80.9MB/s), io=154MiB (162MB), run=2001-2001msec 00:09:57.828 WRITE: bw=77.0MiB/s (80.7MB/s), 77.0MiB/s-77.0MiB/s (80.7MB/s-80.7MB/s), io=154MiB (162MB), run=2001-2001msec 00:09:57.828 ----------------------------------------------------- 00:09:57.828 Suppressions used: 00:09:57.828 count bytes template 00:09:57.828 1 32 /usr/src/fio/parse.c 00:09:57.828 1 8 libtcmalloc_minimal.so 00:09:57.828 ----------------------------------------------------- 00:09:57.828 00:09:57.828 17:44:17 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:57.828 17:44:17 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:57.828 17:44:17 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:57.828 17:44:17 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:57.828 17:44:17 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:57.828 17:44:17 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:57.828 17:44:17 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:57.828 17:44:17 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:57.828 17:44:17 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:57.828 17:44:17 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:09:57.828 17:44:17 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:57.828 17:44:17 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:09:57.828 17:44:17 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:57.828 17:44:17 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:09:57.828 17:44:17 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:09:57.828 17:44:17 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:09:57.828 17:44:17 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:57.828 17:44:17 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:09:57.828 17:44:17 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:09:57.828 17:44:17 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:57.829 17:44:17 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:57.829 17:44:17 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:09:57.829 17:44:17 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:57.829 17:44:17 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:58.086 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:58.086 fio-3.35 00:09:58.087 Starting 1 thread 00:10:04.654 00:10:04.654 test: (groupid=0, jobs=1): err= 0: pid=77332: Tue Nov 5 17:44:23 2024 00:10:04.654 read: IOPS=22.6k, BW=88.3MiB/s (92.6MB/s)(177MiB/2001msec) 00:10:04.654 slat (usec): min=4, max=267, avg= 5.05, stdev= 2.73 00:10:04.654 clat (usec): min=272, max=8321, avg=2828.11, stdev=761.79 00:10:04.654 lat (usec): min=276, max=8356, avg=2833.16, stdev=762.91 00:10:04.654 clat percentiles (usec): 00:10:04.654 | 1.00th=[ 2147], 5.00th=[ 2311], 10.00th=[ 2343], 20.00th=[ 2442], 00:10:04.654 | 30.00th=[ 2507], 40.00th=[ 2540], 50.00th=[ 2606], 60.00th=[ 2671], 00:10:04.654 | 70.00th=[ 2737], 80.00th=[ 2900], 90.00th=[ 3556], 95.00th=[ 4752], 00:10:04.654 | 99.00th=[ 6128], 99.50th=[ 6325], 99.90th=[ 6652], 99.95th=[ 6783], 00:10:04.654 | 99.99th=[ 8160] 00:10:04.654 bw ( KiB/s): min=83744, max=98744, per=99.12%, avg=89656.00, stdev=7988.44, samples=3 00:10:04.654 iops : min=20936, max=24686, avg=22414.00, stdev=1997.11, samples=3 00:10:04.654 write: IOPS=22.5k, BW=87.9MiB/s (92.1MB/s)(176MiB/2001msec); 0 zone resets 00:10:04.654 slat (usec): min=4, max=313, avg= 5.29, stdev= 2.97 00:10:04.654 clat (usec): min=200, max=8254, avg=2828.99, stdev=765.72 00:10:04.654 lat (usec): min=205, max=8265, avg=2834.28, stdev=766.88 00:10:04.654 clat percentiles (usec): 00:10:04.654 | 1.00th=[ 
2147], 5.00th=[ 2311], 10.00th=[ 2376], 20.00th=[ 2442], 00:10:04.654 | 30.00th=[ 2507], 40.00th=[ 2540], 50.00th=[ 2606], 60.00th=[ 2671], 00:10:04.654 | 70.00th=[ 2737], 80.00th=[ 2868], 90.00th=[ 3523], 95.00th=[ 4752], 00:10:04.654 | 99.00th=[ 6128], 99.50th=[ 6325], 99.90th=[ 6652], 99.95th=[ 6783], 00:10:04.654 | 99.99th=[ 8029] 00:10:04.654 bw ( KiB/s): min=83744, max=99184, per=99.81%, avg=89792.00, stdev=8245.31, samples=3 00:10:04.654 iops : min=20936, max=24796, avg=22448.00, stdev=2061.33, samples=3 00:10:04.654 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:10:04.654 lat (msec) : 2=0.43%, 4=91.83%, 10=7.70% 00:10:04.654 cpu : usr=98.75%, sys=0.30%, ctx=4, majf=0, minf=625 00:10:04.654 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:04.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:04.654 issued rwts: total=45250,45005,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.654 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:04.654 00:10:04.654 Run status group 0 (all jobs): 00:10:04.654 READ: bw=88.3MiB/s (92.6MB/s), 88.3MiB/s-88.3MiB/s (92.6MB/s-92.6MB/s), io=177MiB (185MB), run=2001-2001msec 00:10:04.654 WRITE: bw=87.9MiB/s (92.1MB/s), 87.9MiB/s-87.9MiB/s (92.1MB/s-92.1MB/s), io=176MiB (184MB), run=2001-2001msec 00:10:04.654 ----------------------------------------------------- 00:10:04.654 Suppressions used: 00:10:04.654 count bytes template 00:10:04.654 1 32 /usr/src/fio/parse.c 00:10:04.654 1 8 libtcmalloc_minimal.so 00:10:04.654 ----------------------------------------------------- 00:10:04.654 00:10:04.654 17:44:23 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:04.654 17:44:23 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:10:04.654 00:10:04.654 real 0m21.901s 00:10:04.654 user 0m16.768s 00:10:04.654 sys 0m6.606s 00:10:04.654 ************************************ 00:10:04.654 END TEST nvme_fio 00:10:04.654 ************************************ 00:10:04.654 17:44:23 nvme.nvme_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:04.654 17:44:23 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:10:04.654 ************************************ 00:10:04.654 END TEST nvme 00:10:04.654 ************************************ 00:10:04.654 00:10:04.654 real 1m29.485s 00:10:04.654 user 3m32.523s 00:10:04.654 sys 0m16.835s 00:10:04.654 17:44:24 nvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:04.654 17:44:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:04.654 17:44:24 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:10:04.654 17:44:24 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:04.654 17:44:24 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:04.654 17:44:24 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:04.655 17:44:24 -- common/autotest_common.sh@10 -- # set +x 00:10:04.655 ************************************ 00:10:04.655 START TEST nvme_scc 00:10:04.655 ************************************ 00:10:04.655 17:44:24 nvme_scc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:04.655 * Looking for test storage... 
00:10:04.655 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:04.655 17:44:24 nvme_scc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:04.655 17:44:24 nvme_scc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:04.655 17:44:24 nvme_scc -- common/autotest_common.sh@1691 -- # lcov --version 00:10:04.655 17:44:24 nvme_scc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:04.655 17:44:24 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.655 17:44:24 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.655 17:44:24 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.655 17:44:24 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.655 17:44:24 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.655 17:44:24 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.655 17:44:24 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.655 17:44:24 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.655 17:44:24 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.655 17:44:24 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.655 17:44:24 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.655 17:44:24 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:10:04.655 17:44:24 nvme_scc -- scripts/common.sh@345 -- # : 1 00:10:04.655 17:44:24 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.655 17:44:24 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:04.655 17:44:24 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:10:04.655 17:44:24 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:10:04.655 17:44:24 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.655 17:44:24 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:10:04.655 17:44:24 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.655 17:44:24 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:10:04.655 17:44:24 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:10:04.655 17:44:24 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.655 17:44:24 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:10:04.655 17:44:24 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.655 17:44:24 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.655 17:44:24 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.655 17:44:24 nvme_scc -- scripts/common.sh@368 -- # return 0 00:10:04.655 17:44:24 nvme_scc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.655 17:44:24 nvme_scc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:04.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.655 --rc genhtml_branch_coverage=1 00:10:04.655 --rc genhtml_function_coverage=1 00:10:04.655 --rc genhtml_legend=1 00:10:04.655 --rc geninfo_all_blocks=1 00:10:04.655 --rc geninfo_unexecuted_blocks=1 00:10:04.655 00:10:04.655 ' 00:10:04.655 17:44:24 nvme_scc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:04.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.655 --rc genhtml_branch_coverage=1 00:10:04.655 --rc genhtml_function_coverage=1 00:10:04.655 --rc genhtml_legend=1 00:10:04.655 --rc geninfo_all_blocks=1 00:10:04.655 --rc geninfo_unexecuted_blocks=1 00:10:04.655 00:10:04.655 ' 00:10:04.655 17:44:24 nvme_scc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:10:04.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.655 --rc genhtml_branch_coverage=1 00:10:04.655 --rc genhtml_function_coverage=1 00:10:04.655 --rc genhtml_legend=1 00:10:04.655 --rc geninfo_all_blocks=1 00:10:04.655 --rc geninfo_unexecuted_blocks=1 00:10:04.655 00:10:04.655 ' 00:10:04.655 17:44:24 nvme_scc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:04.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.655 --rc genhtml_branch_coverage=1 00:10:04.655 --rc genhtml_function_coverage=1 00:10:04.655 --rc genhtml_legend=1 00:10:04.655 --rc geninfo_all_blocks=1 00:10:04.655 --rc geninfo_unexecuted_blocks=1 00:10:04.655 00:10:04.655 ' 00:10:04.655 17:44:24 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:04.655 17:44:24 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:04.655 17:44:24 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:04.655 17:44:24 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:04.655 17:44:24 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:04.655 17:44:24 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:04.655 17:44:24 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.655 17:44:24 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.655 17:44:24 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.655 17:44:24 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.655 17:44:24 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.655 17:44:24 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.655 17:44:24 nvme_scc -- paths/export.sh@5 -- # export PATH 00:10:04.655 17:44:24 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
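The lt 1.15 2 check traced above is what selects the lcov coverage options: cmp_versions splits each version string on '.', '-' and ':' into an array and walks the fields numerically, as the ver1/ver2 reads in the trace show. A self-contained sketch of that comparison (simplified, not the verbatim scripts/common.sh code):

    # Sketch: field-wise numeric version comparison, as traced above.
    lt_sketch() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    lt_sketch 1.15 2 && echo "lcov predates 2.x"   # -> prints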
00:10:04.655 17:44:24 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:10:04.655 17:44:24 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:04.655 17:44:24 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:10:04.655 17:44:24 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:04.655 17:44:24 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:10:04.655 17:44:24 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:04.655 17:44:24 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:04.655 17:44:24 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:04.655 17:44:24 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:10:04.655 17:44:24 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:04.655 17:44:24 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:10:04.655 17:44:24 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:04.655 17:44:24 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:04.655 17:44:24 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:04.655 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:04.917 Waiting for block devices as requested 00:10:04.918 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:04.918 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:04.918 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:05.179 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:10.462 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:10.462 17:44:30 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:10.462 17:44:30 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:10.462 17:44:30 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:10.462 17:44:30 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:10.462 17:44:30 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:10.462 17:44:30 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:10.462 17:44:30 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:10.462 17:44:30 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:10.462 17:44:30 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:10.462 17:44:30 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:10.462 17:44:30 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:10.462 17:44:30 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:10.462 17:44:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:10.462 17:44:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:10.462 17:44:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:10.462 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.462 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.462 17:44:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:10.462 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:10.462 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.462 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.462 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
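scan_nvme_ctrls, traced from here onward, builds one bash associative array per controller: nvme_get runs nvme id-ctrl, splits each "field : value" line on ':' via IFS, and evals the pair into e.g. nvme0[vid]. A condensed sketch of that loop, assuming the same nvme-cli binary and the usual "field : value" output format:

    # Sketch: parse `nvme id-ctrl` output into an associative array,
    # mirroring the nvme_get / IFS=: / read -r reg val steps traced above.
    declare -A nvme0
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}              # drop padding around the key
        val="${val#"${val%%[![:space:]]*}"}"  # trim leading spaces from value
        [[ -n $reg && -n $val ]] && nvme0[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "vid=${nvme0[vid]} sn=${nvme0[sn]} mdts=${nvme0[mdts]}"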
00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.463 17:44:30 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
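Several of the values captured above are exponents, not byte counts. mdts=7, for instance, caps a single transfer at 2^7 units of the controller's minimum memory page size; with the usual 4 KiB minimum page (CAP.MPSMIN = 0) that comes to 512 KiB. A one-line check under that assumption:

    # mdts is a power-of-two exponent over the minimum page size.
    mdts=7; min_page=4096               # assumes CAP.MPSMIN = 0 (4 KiB)
    echo $(( (1 << mdts) * min_page ))  # -> 524288 bytes = 512 KiB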
00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.463 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:10.464 17:44:30 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.464 17:44:30 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:10.464 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.465 17:44:30 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:10.465 17:44:30 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:10.465 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
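The id-ns fields read above pin down the namespace geometry: nsze is the size in logical blocks, and the low nibble of flbas (0x4) selects LBA format 4, whose lbads field (reported as lbads:12 in the lbaf4 entry just below) gives the block size as a power of two. Under those values the namespace works out to 5 GiB:

    # Sketch: namespace capacity from the captured id-ns fields.
    nsze=0x140000   # size in logical blocks
    lbads=12        # from the lbaf4 "ms:0 lbads:12 rp:0 (in use)" entry below
    echo $(( nsze * (1 << lbads) ))  # -> 5368709120 bytes = 5 GiB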
00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:10.466 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:10.467 17:44:30 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:10.467 17:44:30 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:10.467 17:44:30 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:10.467 17:44:30 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:10.467 17:44:30 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.467 17:44:30 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:10.467 17:44:30 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.467 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:10.468 
17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
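The records above and below all repeat one pattern: nvme_get pipes the human-readable "key : value" output of nvme id-ctrl (or id-ns) through a while IFS=: read -r reg val loop, skips empty values with the [[ -n ... ]] guard at functions.sh@22, and evals each surviving pair into a bash associative array at functions.sh@23. A minimal standalone sketch of that pattern, assuming nvme-cli's usual human-readable output format; variable names are illustrative, not the verbatim functions.sh code:

    #!/usr/bin/env bash
    # Sketch of the parsing loop traced at nvme/functions.sh@21-@23.
    # Assumes `nvme id-ctrl` prints "key : value" lines (nvme-cli human output).
    declare -A ctrl=()
    while IFS=: read -r reg val; do
        [[ -n ${val// /} ]] || continue      # the [[ -n ... ]] guard at @22
        reg=${reg//[[:space:]]/}             # "lbaf  0 " -> "lbaf0", as in the trace
        ctrl[$reg]=${val# }                  # keep the remainder, embedded colons included
    done < <(nvme id-ctrl /dev/nvme1 2>/dev/null)
    printf 'vid=%s sn=%s\n' "${ctrl[vid]-}" "${ctrl[sn]-}"

The eval at @23 exists in the real script because the array name itself is dynamic ($ref is nvme1 here, nvme1n1 a little further down); the sketch fixes the name for brevity. Using read -r reg val with IFS=: also explains why multi-colon values such as the lbaf descriptors survive intact: the last variable receives the whole remainder of the line.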
00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:10.468 17:44:30 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.468 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.469 17:44:30 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:10.469 17:44:30 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.469 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:10.470 17:44:30 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
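The few remaining nvme1 id-ctrl records (ofcs, the ps0 power-state descriptor, rwt, active_power_workload) follow below; functions.sh@53-@63 then gather the controller's namespaces through a nameref and register the controller in the global ctrls/nvmes/bdfs/ordered_ctrls maps, exactly as happened for nvme0 earlier in this trace. A hedged sketch of that bookkeeping, assuming the /sys/class/nvme layout the trace walks (identifiers mirror the trace, but this is a simplification, not the verbatim script):

    #!/usr/bin/env bash
    # Sketch of the controller scan and registration traced at functions.sh@47-@63.
    declare -A ctrls=() nvmes=() bdfs=()
    declare -a ordered_ctrls=()

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        ctrl_dev=${ctrl##*/}                    # e.g. nvme1
        pci=$(readlink -f "$ctrl/device")       # assumed BDF derivation; @49 records it
        unset -n _ctrl_ns                       # re-point the nameref each iteration
        declare -n _ctrl_ns=${ctrl_dev}_ns
        for ns in "$ctrl/${ctrl_dev}n"*; do     # /sys/class/nvme/nvme1/nvme1n1 ...
            [[ -e $ns ]] || continue
            _ctrl_ns[${ns##*n}]=${ns##*/}       # @58: index namespaces by number
        done
        ctrls[$ctrl_dev]=$ctrl_dev              # @60
        nvmes[$ctrl_dev]=${ctrl_dev}_ns         # @61: name of the controller's ns array
        bdfs[$ctrl_dev]=${pci##*/}              # @62: e.g. 0000:00:10.0
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev  # @63
    done

The pci_can_use check at @50 (scripts/common.sh in the trace) appears to filter controllers against allow/deny lists before registration; the empty left operand in [[ =~ 0000:00:10.0 ]] is an unset list variable that xtrace renders as nothing.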
00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.470 17:44:30 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:10.470 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:10.471 
17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.471 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:10.472 17:44:30 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:10.472 17:44:30 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:10.472 17:44:30 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:10.472 17:44:30 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:10.472 17:44:30 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.472 17:44:30 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:10.472 17:44:30 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.472 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:10.473 17:44:30 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
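[Editor's note] The records above (and the matching runs for nvme1, nvme1n1, and the other devices) are the xtrace of one parsing idiom repeated per register: nvme-cli prints id-ctrl output as "reg : value" lines, functions.sh splits each line on the first colon with IFS=: read, and stores the pair in a per-device associative array via eval. A minimal sketch of that loop, reconstructed from the trace (not the verbatim functions.sh source; the nvme-cli output format is assumed from the values captured here):

    # Reconstruction of the traced idiom in nvme/functions.sh@16-23; approximate.
    nvme_get_sketch() {
      local ref=$1 reg val                    # ref names the device, e.g. nvme2
      local -gA "$ref=()"                     # global associative array per device
      while IFS=: read -r reg val; do         # split "reg : val" on the first ':'
        reg=${reg//[[:space:]]/}              # "lbaf  0" -> "lbaf0", "vid   " -> "vid"
        [[ -n $reg && -n $val ]] || continue  # reject banner/blank lines (no value)
        eval "${ref}[${reg}]=\"${val# }\""    # e.g. nvme2[vid]="0x1b36"
      done < <(/usr/local/src/nvme-cli/nvme id-ctrl "/dev/$ref")
    }

The [[ -n '' ]] check visible at the top of each nvme_get block is exactly this guard rejecting nvme-cli's banner line, and the nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' entry further down is the same first-colon split applied to the power-state continuation line.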
00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:10.473 17:44:30 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.473 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
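[Editor's note] The hctma/mntmt/mxtmt fields just above are the host-controlled thermal management group; like the wctemp=343 and cctemp=373 thresholds recorded a little earlier for this controller, NVMe reports these temperatures in kelvins. A one-liner to convert (hypothetical helper, not part of functions.sh):

    k_to_c() { echo "$(( $1 - 273 ))"; }      # integer kelvin -> Celsius
    k_to_c "${nvme2[wctemp]}"   # 343 -> 70   (warning composite temperature)
    k_to_c "${nvme2[cctemp]}"   # 373 -> 100  (critical composite temperature)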
00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
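[Editor's note] The sqes=0x66 and cqes=0x44 values just captured pack two power-of-two sizes into one byte: bits 7:4 give the maximum queue entry size and bits 3:0 the required one. Decoding them confirms the standard 64-byte submission and 16-byte completion queue entries (sketch with a hypothetical helper, not in functions.sh):

    decode_qes() {
      local qes=$(( $1 ))                     # accepts hex, e.g. 0x66
      echo "max:$(( 1 << (qes >> 4) ))B required:$(( 1 << (qes & 0xf) ))B"
    }
    decode_qes 0x66   # sqes -> max:64B required:64B
    decode_qes 0x44   # cqes -> max:16B required:16B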
00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:10.474 
17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:10.474 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
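[Editor's note] nvme_get is now re-run with id-ns per namespace. For nvme2n1 the trace records nsze/ncap/nuse of 0x100000 blocks, nlbaf=7 (eight LBA formats, 0-7), and flbas=0x4 selecting format 4, whose lbads:12 entry (captured a little further down) means 4096-byte blocks. Putting those together gives the namespace capacity; a sketch over the values in this trace, nothing below is from functions.sh itself:

    fmt=$(( ${nvme2n1[flbas]} & 0xf ))        # bits 3:0 -> 4, the active LBA format
    lbads=$(sed 's/.*lbads:\([0-9]*\).*/\1/' <<< "${nvme2n1[lbaf$fmt]}")   # -> 12
    echo $(( ${nvme2n1[nsze]} * (1 << lbads) ))   # 1048576 * 4096 = 4294967296 (4 GiB)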
00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.475 17:44:30 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.475 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:10.476 17:44:30 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:10.476 17:44:30 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.476 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:10.477 17:44:30 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
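The scalar id-ns fields end at nguid/eui64 (all zeroes on these QEMU namespaces); the eight LBA-format descriptors lbaf0..lbaf7 are parsed next. Decoding the in-use one, "ms:0 lbads:12 rp:0", as a sanity check: lbads is log2 of the data block size, and nsze counts blocks (0x100000 for these namespaces, as the nvme2n3 dump below shows):

    # decode the in-use LBA format descriptor: ms:0 lbads:12 rp:0
    lbads=12
    echo $(( 1 << lbads ))             # 4096-byte data blocks, no metadata (ms:0)
    nsze=0x100000                      # namespace size in blocks, from the trace
    echo $(( nsze * (1 << lbads) ))    # 4294967296 bytes = 4 GiB per namespace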
00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.477 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.478 
17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.478 
17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:10.478 17:44:30 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:10.478 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:10.479 
17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:10.479 17:44:30 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:10.479 17:44:30 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:10.479 17:44:30 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:10.479 17:44:30 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.479 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:10.480 17:44:30 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
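Two of the nvme3 id-ctrl fields just captured are packed values. ver encodes the spec revision as major/minor/tertiary bytes, and mdts caps the per-command transfer size as a power of two in units of the controller's minimum memory page size; the 4 KiB unit below is an assumption, the real value comes from CAP.MPSMIN:

    # decode ver=0x10400 and mdts=7 from the dump above
    ver=0x10400
    printf 'NVMe %d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))  # NVMe 1.4.0
    mdts=7 mps=4096                    # mps assumed: CAP.MPSMIN of 4 KiB
    echo $(( (1 << mdts) * mps ))      # 524288, i.e. 512 KiB max transfer per command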
00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
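Reading the flag fields against the NVMe base spec: cmic=0x2 (bit 1) marks a subsystem with more than one controller, oaes=0x100 (bit 8) advertises Namespace Attribute Notices, and ctratt=0x88010 includes bit 4 (Endurance Groups) and bit 19 (Flexible Data Placement), which fits the fdp-subsys3 NQN this controller reports further down. A quick check, with bit positions taken from the spec rather than the trace:

    # bit tests against the registers captured above
    cmic=0x2 oaes=0x100 ctratt=0x88010
    (( cmic   & 1 << 1  )) && echo "multi-controller subsystem"
    (( oaes   & 1 << 8  )) && echo "namespace attribute notices"
    (( ctratt & 1 << 4  )) && echo "endurance groups"
    (( ctratt & 1 << 19 )) && echo "flexible data placement (FDP)"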
00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.480 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.481 
17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.481 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.482 17:44:30 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
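oncs=0x15d, captured just above, is the field the whole nvme_scc run keys on: after registration, get_ctrls_with_feature walks ctrls[] and ctrl_has_scc tests ONCS bit 8, the Copy (simple copy) command, as the selection pass further down shows before settling on nvme1. A sketch of that test with the get_oncs indirection inlined, using the same nameref pattern the trace shows:

    # sketch of ctrl_has_scc (functions.sh@184-188), indirection inlined
    ctrl_has_scc() {
        local -n _ctrl=$1              # nameref into e.g. the nvme3 assoc array
        local oncs=${_ctrl[oncs]}      # 0x15d here
        (( oncs & 1 << 8 ))            # bit 8 = Copy command, i.e. SCC
    }
    ctrl_has_scc nvme3 && echo "nvme3 supports simple copy"

0x15d also sets bits 0, 2, 3, 4 and 6 (Compare, DSM, Write Zeroes, Save/Select, Timestamp), but only bit 8 matters here.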
00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:10.482 17:44:30 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:10.482 17:44:30 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:10:10.482 17:44:30 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:10:10.483 
17:44:30 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:10:10.483 17:44:30 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:10:10.483 17:44:30 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:10:10.483 17:44:30 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:10:10.483 17:44:30 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:11.049 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:11.307 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:11.307 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:11.565 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:11.565 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:10:11.565 17:44:31 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:11.565 17:44:31 nvme_scc -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:11.565 17:44:31 nvme_scc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:11.565 17:44:31 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:11.565 ************************************ 00:10:11.565 START TEST nvme_simple_copy 00:10:11.565 ************************************ 00:10:11.565 17:44:31 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:11.823 Initializing NVMe Controllers 00:10:11.823 Attaching to 0000:00:10.0 00:10:11.823 Controller supports SCC. Attached to 0000:00:10.0 00:10:11.823 Namespace ID: 1 size: 6GB 00:10:11.823 Initialization complete. 00:10:11.823 00:10:11.823 Controller QEMU NVMe Ctrl (12340 ) 00:10:11.823 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:10:11.823 Namespace Block Size:4096 00:10:11.823 Writing LBAs 0 to 63 with Random Data 00:10:11.823 Copied LBAs from 0 - 63 to the Destination LBA 256 00:10:11.823 LBAs matching Written Data: 64 00:10:11.823 ************************************ 00:10:11.823 00:10:11.823 real 0m0.254s 00:10:11.823 user 0m0.083s 00:10:11.823 sys 0m0.069s 00:10:11.823 17:44:31 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:11.823 17:44:31 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:10:11.823 END TEST nvme_simple_copy 00:10:11.823 ************************************ 00:10:11.823 ************************************ 00:10:11.823 END TEST nvme_scc 00:10:11.823 ************************************ 00:10:11.823 00:10:11.823 real 0m7.637s 00:10:11.823 user 0m0.995s 00:10:11.823 sys 0m1.472s 00:10:11.823 17:44:31 nvme_scc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:11.823 17:44:31 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:11.823 17:44:31 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:10:11.823 17:44:31 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:10:11.823 17:44:31 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:10:11.823 17:44:31 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:10:11.823 17:44:31 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:10:11.823 17:44:31 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:11.823 17:44:31 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:11.823 17:44:31 -- common/autotest_common.sh@10 -- # set +x 00:10:11.823 ************************************ 00:10:11.823 START TEST nvme_fdp 00:10:11.823 ************************************ 00:10:11.823 17:44:31 nvme_fdp -- common/autotest_common.sh@1127 -- # test/nvme/nvme_fdp.sh 00:10:12.081 * Looking for test storage... 
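
Two details in the SCC run above are worth unpacking. Controller selection keyed off ONCS bit 8, the Simple Copy Command bit: every QEMU controller here reports oncs=0x15d, and 0x15d & 0x100 is non-zero, so all four qualify and nvme1 (0000:00:10.0) is picked first. The test then wrote LBAs 0-63 with random data, issued a Simple Copy to destination LBA 256, and found all 64 destination LBAs matching. A sketch of both checks; the dd/cmp verification is an illustrative stand-in on a hypothetical kernel-attached node for what the SPDK simple_copy binary does through its own PCIe driver:

    oncs=0x15d                                  # from the id-ctrl walk above
    (( oncs & 1 << 8 )) && echo "controller supports Simple Copy"

    bs=4096                                     # "Namespace Block Size:4096" above
    dd if=/dev/nvme1n1 bs=$bs skip=0   count=64 status=none > /tmp/src.bin
    dd if=/dev/nvme1n1 bs=$bs skip=256 count=64 status=none > /tmp/dst.bin
    cmp -s /tmp/src.bin /tmp/dst.bin && echo "LBAs matching Written Data: 64"
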
00:10:12.081 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:12.081 17:44:31 nvme_fdp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:12.081 17:44:31 nvme_fdp -- common/autotest_common.sh@1691 -- # lcov --version 00:10:12.081 17:44:31 nvme_fdp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:12.081 17:44:31 nvme_fdp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:12.081 17:44:31 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:12.081 17:44:31 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:12.081 17:44:31 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:12.081 17:44:31 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:10:12.081 17:44:31 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:10:12.081 17:44:31 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:10:12.081 17:44:31 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:10:12.081 17:44:31 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:10:12.081 17:44:31 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:10:12.081 17:44:31 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:10:12.081 17:44:31 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:12.081 17:44:31 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:10:12.081 17:44:31 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:10:12.081 17:44:31 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:12.081 17:44:31 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:12.081 17:44:31 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:10:12.081 17:44:31 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:10:12.081 17:44:31 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:12.081 17:44:31 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:10:12.081 17:44:31 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:12.081 17:44:31 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:10:12.081 17:44:31 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:10:12.081 17:44:31 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:12.081 17:44:31 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:10:12.081 17:44:31 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:12.081 17:44:31 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:12.081 17:44:31 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:12.081 17:44:31 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:10:12.081 17:44:31 nvme_fdp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:12.081 17:44:31 nvme_fdp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:12.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.081 --rc genhtml_branch_coverage=1 00:10:12.081 --rc genhtml_function_coverage=1 00:10:12.081 --rc genhtml_legend=1 00:10:12.081 --rc geninfo_all_blocks=1 00:10:12.081 --rc geninfo_unexecuted_blocks=1 00:10:12.081 00:10:12.081 ' 00:10:12.081 17:44:31 nvme_fdp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:12.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.081 --rc genhtml_branch_coverage=1 00:10:12.081 --rc genhtml_function_coverage=1 00:10:12.081 --rc genhtml_legend=1 00:10:12.081 --rc geninfo_all_blocks=1 00:10:12.081 --rc geninfo_unexecuted_blocks=1 00:10:12.081 00:10:12.081 ' 00:10:12.081 17:44:31 nvme_fdp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:10:12.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.082 --rc genhtml_branch_coverage=1 00:10:12.082 --rc genhtml_function_coverage=1 00:10:12.082 --rc genhtml_legend=1 00:10:12.082 --rc geninfo_all_blocks=1 00:10:12.082 --rc geninfo_unexecuted_blocks=1 00:10:12.082 00:10:12.082 ' 00:10:12.082 17:44:31 nvme_fdp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:12.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.082 --rc genhtml_branch_coverage=1 00:10:12.082 --rc genhtml_function_coverage=1 00:10:12.082 --rc genhtml_legend=1 00:10:12.082 --rc geninfo_all_blocks=1 00:10:12.082 --rc geninfo_unexecuted_blocks=1 00:10:12.082 00:10:12.082 ' 00:10:12.082 17:44:31 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:12.082 17:44:31 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:12.082 17:44:31 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:12.082 17:44:31 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:12.082 17:44:31 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:12.082 17:44:31 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:10:12.082 17:44:31 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.082 17:44:31 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.082 17:44:31 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.082 17:44:31 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.082 17:44:31 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.082 17:44:31 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.082 17:44:31 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:10:12.082 17:44:31 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
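
The lcov gate traced just above ("lt 1.15 2") is a plain component-wise version compare: both strings are split on ".", "-", or ":", then compared field by field until one side wins, so 1.15 sorts before 2 because 1 < 2 in the first field. A condensed sketch of that logic, assuming purely numeric components:

    cmp_lt() {                                   # does version $1 sort before $2?
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                                 # equal versions are not less-than
    }
    cmp_lt 1.15 2 && echo "pre-2.0 lcov: keep the old --rc option spellings"

That result is why LCOV_OPTS above carries --rc lcov_branch_coverage=1 rather than the spelling newer lcov releases use.
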
00:10:12.082 17:44:31 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:10:12.082 17:44:31 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:12.082 17:44:31 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:10:12.082 17:44:31 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:12.082 17:44:31 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:10:12.082 17:44:31 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:12.082 17:44:31 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:12.082 17:44:31 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:12.082 17:44:31 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:10:12.082 17:44:31 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:12.082 17:44:31 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:12.340 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:12.598 Waiting for block devices as requested 00:10:12.598 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:12.598 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:12.598 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:12.598 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:17.871 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:17.871 17:44:37 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:17.871 17:44:37 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:17.871 17:44:37 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:17.871 17:44:37 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:17.871 17:44:37 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:17.871 17:44:37 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:17.871 17:44:37 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:17.871 17:44:37 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:17.871 17:44:37 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:17.871 17:44:37 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:17.871 17:44:37 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:17.871 17:44:37 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:17.871 17:44:37 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:17.871 17:44:37 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:17.871 17:44:37 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:17.871 17:44:37 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:17.871 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.871 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.871 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:17.871 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.871 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.871 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
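
scan_nvme_ctrls, entered above, walks /sys/class/nvme/nvme*, checks each controller's PCI address against the allow-list (empty here, so pci_can_use returns 0 for everything), and then runs nvme-cli's id-ctrl against the character device. The discovery step reduces to roughly this, illustrative and minus the allow-list plumbing:

    declare -A ctrls bdfs
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue                       # glob may match nothing
        dev=${ctrl##*/}                                  # nvme0, nvme1, ...
        pci=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:11.0
        ctrls[$dev]=$dev
        bdfs[$dev]=$pci
    done
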
00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:17.872 17:44:37 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:17.872 17:44:37 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
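
Most of nvme0's fields parse to zero on this QEMU model, but oacs=0x12a is a real bitmask of optional admin commands. Decoding it against the NVMe base spec (the bit names below are spec knowledge, not something this log states):

    oacs=0x12a                                             # binary 1_0010_1010
    (( oacs & 1 << 1 )) && echo "Format NVM"               # bit 1
    (( oacs & 1 << 3 )) && echo "Namespace Management"     # bit 3
    (( oacs & 1 << 5 )) && echo "Directives"               # bit 5
    (( oacs & 1 << 8 )) && echo "Doorbell Buffer Config"   # bit 8
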
00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.872 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:17.873 17:44:37 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.873 17:44:37 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:17.873 17:44:37 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.873 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:17.874 17:44:37 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:17.874 
17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:17.874 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:17.875 17:44:37 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.875 17:44:37 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:17.875 17:44:37 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.875 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:17.876 17:44:37 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:17.876 17:44:37 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:17.876 17:44:37 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:17.876 17:44:37 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.876 17:44:37 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.876 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.877 
17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.877 17:44:37 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:17.877 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.878 
17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.878 17:44:37 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:17.878 17:44:37 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.878 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.879 17:44:37 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:17.879 17:44:37 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.879 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:17.880 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:17.881 17:44:37 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:17.881 17:44:37 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:17.881 17:44:37 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:17.881 17:44:37 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:17.881 
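The stretch of trace just above finishes cataloguing nvme1: its namespace map is linked in, and the controller is recorded in the ctrls, nvmes, bdfs, and ordered_ctrls maps before the sysfs loop advances to nvme2 at 0000:00:12.0. A minimal stand-alone sketch of that bookkeeping pattern (the map names follow nvme/functions.sh, but the loop body is illustrative, not the SPDK implementation, and it assumes PCIe-attached controllers whose sysfs "device" link resolves to the BDF):

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue                        # glob may match nothing
        ctrl_dev=${ctrl##*/}                              # e.g. nvme2
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:12.0 (assumes PCIe)
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns                 # name of its namespace map
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # indexed by controller number
    done
    for c in "${!bdfs[@]}"; do printf '%s -> %s\n' "$c" "${bdfs[$c]}"; done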
17:44:37 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:17.881 17:44:37 nvme_fdp 
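What follows for nvme2 is another expansion of the same nvme_get loop seen throughout this trace: nvme-cli's "key : value" output is folded, line by line, into a dynamically named global associative array, which is why every assignment goes through eval. A minimal runnable sketch of the pattern (hypothetical helper name; requires nvme-cli and root, and keeps trailing padding in values exactly as the trace does):

    nvme_get_sketch() {                              # e.g. nvme_get_sketch nvme2 id-ctrl /dev/nvme2
        local ref=$1 cmd=$2 dev=$3 reg val
        declare -gA "$ref=()"                        # same compound form shown in the trace
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue                # skip headers and blank lines
            reg=${reg//[[:space:]]/}                 # "lbaf  0 " -> "lbaf0"
            val="${val#"${val%%[![:space:]]*}"}"     # trim leading blanks only
            eval "${ref}[$reg]=\"\$val\""            # dynamic array name forces eval
        done < <(nvme "$cmd" "$dev")
    }
    nvme_get_sketch nvme2 id-ctrl /dev/nvme2 && echo "vid=${nvme2[vid]}"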
-- nvme/functions.sh@21 -- # IFS=: 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.881 17:44:37 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:17.881 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.882 17:44:37 nvme_fdp 
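One detail worth flagging in the id-ctrl capture above: wctemp=343 and cctemp=373 are the warning and critical composite-temperature thresholds, which the NVMe spec reports in Kelvin. A one-liner with the values copied from the trace (integer 273 offset for readability):

    declare -A nvme2=([wctemp]=343 [cctemp]=373)   # values as captured above
    echo "warning: $(( nvme2[wctemp] - 273 ))C, critical: $(( nvme2[cctemp] - 273 ))C"   # 70C, 100C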
-- nvme/functions.sh@21 -- # read -r reg val 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.882 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.883 17:44:37 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:17.883 17:44:37 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.883 17:44:37 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:17.883 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:17.884 17:44:37 nvme_fdp -- 
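After the controller-level fields, the trace switches to the per-namespace pass: a nameref (_ctrl_ns) aliases this controller's namespace map, and each sysfs child such as nvme2n1 gets its own id-ns parse, keyed by namespace number. A stand-alone sketch of that structure (illustrative only; reuses the hypothetical nvme_get_sketch from earlier):

    declare -A nvme2_ns=()
    declare -n _ctrl_ns=nvme2_ns
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/${ctrl##*/}n"*; do               # nvme2n1, nvme2n2, ...
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}
        # nvme_get_sketch "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns##*n}]=$ns_dev                   # key 1 -> nvme2n1, 2 -> nvme2n2
    done
    declare -p nvme2_ns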
nvme/functions.sh@21 -- # IFS=: 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.884 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.885 17:44:37 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.885 17:44:37 nvme_fdp -- 
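The lbaf0..lbaf7 strings captured for nvme2n1 just above pack three fields per LBA format: metadata bytes (ms), log2 of the data block size (lbads), and relative performance (rp); the "(in use)" marker lands on the format selected by the low bits of flbas. A short sketch recovering the active block size from exactly those strings (values copied from the trace):

    declare -A nvme2n1=([flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)')
    fmt=$(( nvme2n1[flbas] & 0xf ))                   # low nibble selects the format -> 4
    lbaf=${nvme2n1[lbaf$fmt]}
    lbads=${lbaf##*lbads:}; lbads=${lbads%% *}        # -> 12
    echo "active block size: $(( 1 << lbads )) bytes" # -> 4096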
nvme/functions.sh@21 -- # read -r reg val 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:17.885 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.886 17:44:37 nvme_fdp 
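What the trace above is showing is the body of nvme_get at work: functions.sh@16-23 pipes the output of nvme id-ns through a "while IFS=: read -r reg val" loop and evals each "field : value" pair into a global associative array named after the device (nvme2n2 here, nvme2n3 next). A minimal sketch of that pattern, with our own whitespace trimming; the real functions.sh differs in detail (for instance it preserves trailing whitespace in values, as the lbaf entries above show):

nvme_get() {
    # $1 = array name, remaining args = the nvme-cli command to run
    local ref=$1 reg val
    shift
    local -gA "$ref"                 # global associative array, e.g. nvme2n2
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}     # field names are single tokens
        read -r val <<< "$val"       # trim outer whitespace, keep inner spaces
        [[ -n $val ]] || continue    # the trace skips empty values
        eval "${ref}[\$reg]=\$val"   # nvme2n2[nsze]=0x100000, ...
    done < <("$@")
}

# usage, matching the call sites in the trace:
# nvme_get nvme2n2 /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2

Note that read assigns everything after the first colon to the last variable, which is why values that themselves contain colons (the "ms:0 lbads:9 rp:0" format strings) survive intact.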
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:17.886 17:44:37 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:17.886 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:17.887 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:17.888 
17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:17.888 17:44:37 nvme_fdp -- 
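Each lbaf0-lbaf7 string captured for these namespaces encodes one LBA format: ms is the metadata bytes per block, lbads is the base-2 log of the data size, and rp is the relative performance hint. flbas=0x4 on all three namespaces selects lbaf4 in its low nibble, i.e. "ms:0 lbads:12" (4096-byte blocks, no metadata), which together with nsze=0x100000 fixes the namespace size. The arithmetic, as a sketch (variable names here are ours):

flbas=0x4                      # low nibble indexes the lbaf table -> lbaf4
lbads=12                       # from "lbaf4 : ms:0 lbads:12 rp:0 (in use)"
nsze=0x100000                  # namespace size in logical blocks
block=$(( 1 << lbads ))        # 2^12 = 4096 bytes
total=$(( nsze * block ))      # 1048576 * 4096 = 4294967296 bytes
printf '%d blocks of %d B = %d GiB\n' $(( nsze )) "$block" $(( total >> 30 ))
# -> 1048576 blocks of 4096 B = 4 GiB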
nvme/functions.sh@21 -- # IFS=: 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:17.888 17:44:37 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:17.888 17:44:37 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:17.888 17:44:37 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:17.888 17:44:37 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.888 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:17.889 17:44:37 nvme_fdp -- 
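With nvme2's three namespaces catalogued, functions.sh@60-63 files the controller into its bookkeeping maps (ctrls, nvmes, bdfs, ordered_ctrls) and the outer loop advances to /sys/class/nvme/nvme3, gating it through pci_can_use. The trace's empty-left-hand "[[ =~ 0000:00:13.0 ]]" followed by "[[ -z '' ]]" and "return 0" is consistent with a block-list/allow-list check against unset filter variables; a sketch of that reading (the PCI_BLOCKED/PCI_ALLOWED names are our assumption from the empty matches, the map assignments mirror the trace):

declare -A ctrls nvmes bdfs
declare -a ordered_ctrls

pci_can_use() {               # sketch; the real test lives in scripts/common.sh
    local pci=$1
    [[ $PCI_BLOCKED =~ $pci ]] && return 1   # an explicit block list wins
    [[ -z $PCI_ALLOWED ]] && return 0        # no allow list: everything passes
    [[ $PCI_ALLOWED =~ $pci ]]               # otherwise the device must be listed
}

ctrl_dev=nvme2
ctrls["$ctrl_dev"]=nvme2
nvmes["$ctrl_dev"]=nvme2_ns                  # name of the namespace map
bdfs["$ctrl_dev"]=0000:00:12.0               # PCI address recorded for the ctrl
ordered_ctrls[${ctrl_dev/nvme/}]=nvme2       # index 2 -> nvme2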
nvme/functions.sh@21 -- # IFS=: 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.889 17:44:37 
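The id-ctrl fields now landing in nvme3[] identify the emulated device: vid 0x1b36 and ssvid 0x1af4 are Red Hat/QEMU PCI vendor IDs, sn/mn/fr are the QEMU defaults, and mdts=7 caps any single data transfer at 2^7 memory pages. Assuming the usual 4 KiB minimum page size (CAP.MPSMIN=0, which this trace does not show), that works out to:

mdts=7
page=4096                           # assumed CAP.MPSMIN page size
echo $(( (1 << mdts) * page ))      # 128 pages * 4096 B = 524288 B (512 KiB)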
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.889 
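ctratt=0x88010 is the field that makes this controller interesting to the nvme_fdp test: bit 19 of CTRATT is defined by NVMe 2.0 as Flexible Data Placement support (the other set bits cover unrelated controller attributes and are informational here). A one-line check in the same spirit as the trace:

ctratt=0x88010
(( ctratt & (1 << 19) )) && echo "FDP supported by this controller"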
17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:17.889 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:17.890 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:17.890 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.890 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.890 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:17.890 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:17.890 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:17.890 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.890 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.890 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.890 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:17.890 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:17.890 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.890 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.890 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.890 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:17.890 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:17.890 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.890 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.890 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.890 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:17.890 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:17.890 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.890 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.890 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.890 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:17.890 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:17.890 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:17.890 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.890 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.890 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.149 17:44:37 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.149 
17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.149 17:44:37 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.149 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
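The eval pattern repeated above is how functions.sh turns Identify Controller output into an associative array: each line is split on the first ':' into a register name and a value, then stored as nvme3[reg]=val. A minimal standalone sketch of that loop; the nvme-cli invocation and the whitespace trimming are assumptions, not taken from the trace:

    # Reads "field : value" lines and fills nvme3[field]=value, mirroring the
    # eval 'nvme3[reg]="val"' calls traced above.
    declare -A nvme3
    while IFS=: read -r reg val; do
        reg="${reg//[[:space:]]/}"        # strip padding around the field name
        val="${val# }"                    # drop the leading space in the value
        [[ -n $val ]] && eval "nvme3[$reg]=\"$val\""
    done < <(nvme id-ctrl /dev/nvme3)     # assumed source of the identify data
    echo "${nvme3[subnqn]}"               # values containing ':' survive intact

Because read is given two variables, only the first ':' splits, which is why subnqn values like nqn.2019-08.org.qemu:fdp-subsys3 come through whole.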
00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.150 17:44:37 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:18.150 17:44:37 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:18.150 17:44:37 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:10:18.151 17:44:37 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:18.151 17:44:37 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:10:18.151 17:44:37 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:18.151 17:44:37 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:10:18.151 17:44:37 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:10:18.151 17:44:37 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:10:18.151 17:44:37 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:10:18.151 17:44:37 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:18.151 17:44:37 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:18.151 17:44:37 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:18.151 17:44:37 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:18.151 17:44:37 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:18.151 17:44:37 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:18.151 17:44:37 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:18.151 17:44:37 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:18.151 17:44:37 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:10:18.151 17:44:37 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:10:18.151 17:44:37 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:10:18.151 17:44:37 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:18.151 17:44:37 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:10:18.151 17:44:37 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:18.408 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:18.974 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:18.974 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:18.974 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:18.974 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:18.974 17:44:38 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:18.974 17:44:38 nvme_fdp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:18.974 17:44:38 
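Controller selection above keys off CTRATT bit 19, the Flexible Data Placement capability bit from Identify Controller: nvme0, nvme1, and nvme2 report ctratt=0x8000 (bit clear), while nvme3 reports 0x88010 (bit set) and is the controller echoed back. The whole check reduces to one arithmetic test:

    # (( ctratt & 1 << 19 )) succeeds only when the FDP bit is set.
    for ctratt in 0x8000 0x88010; do
        if (( ctratt & 1 << 19 )); then
            echo "$ctratt: FDP supported"   # hit for 0x88010 only
        else
            echo "$ctratt: no FDP"
        fi
    done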
nvme_fdp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:18.974 17:44:38 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:18.974 ************************************ 00:10:18.974 START TEST nvme_flexible_data_placement 00:10:18.974 ************************************ 00:10:18.974 17:44:38 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:19.233 Initializing NVMe Controllers 00:10:19.233 Attaching to 0000:00:13.0 00:10:19.233 Controller supports FDP Attached to 0000:00:13.0 00:10:19.233 Namespace ID: 1 Endurance Group ID: 1 00:10:19.233 Initialization complete. 00:10:19.233 00:10:19.233 ================================== 00:10:19.233 == FDP tests for Namespace: #01 == 00:10:19.233 ================================== 00:10:19.233 00:10:19.233 Get Feature: FDP: 00:10:19.233 ================= 00:10:19.233 Enabled: Yes 00:10:19.233 FDP configuration Index: 0 00:10:19.233 00:10:19.233 FDP configurations log page 00:10:19.233 =========================== 00:10:19.233 Number of FDP configurations: 1 00:10:19.233 Version: 0 00:10:19.233 Size: 112 00:10:19.233 FDP Configuration Descriptor: 0 00:10:19.233 Descriptor Size: 96 00:10:19.233 Reclaim Group Identifier format: 2 00:10:19.233 FDP Volatile Write Cache: Not Present 00:10:19.233 FDP Configuration: Valid 00:10:19.233 Vendor Specific Size: 0 00:10:19.233 Number of Reclaim Groups: 2 00:10:19.233 Number of Reclaim Unit Handles: 8 00:10:19.233 Max Placement Identifiers: 128 00:10:19.233 Number of Namespaces Supported: 256 00:10:19.233 Reclaim Unit Nominal Size: 6000000 bytes 00:10:19.233 Estimated Reclaim Unit Time Limit: Not Reported 00:10:19.233 RUH Desc #000: RUH Type: Initially Isolated 00:10:19.233 RUH Desc #001: RUH Type: Initially Isolated 00:10:19.233 RUH Desc #002: RUH Type: Initially Isolated 00:10:19.233 RUH Desc #003: RUH Type: Initially Isolated 00:10:19.233 RUH Desc #004: RUH Type: Initially Isolated 00:10:19.233 RUH Desc #005: RUH Type: Initially Isolated 00:10:19.233 RUH Desc #006: RUH Type: Initially Isolated 00:10:19.233 RUH Desc #007: RUH Type: Initially Isolated 00:10:19.233 00:10:19.233 FDP reclaim unit handle usage log page 00:10:19.233 ====================================== 00:10:19.233 Number of Reclaim Unit Handles: 8 00:10:19.233 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:19.233 RUH Usage Desc #001: RUH Attributes: Unused 00:10:19.233 RUH Usage Desc #002: RUH Attributes: Unused 00:10:19.233 RUH Usage Desc #003: RUH Attributes: Unused 00:10:19.233 RUH Usage Desc #004: RUH Attributes: Unused 00:10:19.233 RUH Usage Desc #005: RUH Attributes: Unused 00:10:19.233 RUH Usage Desc #006: RUH Attributes: Unused 00:10:19.233 RUH Usage Desc #007: RUH Attributes: Unused 00:10:19.233 00:10:19.233 FDP statistics log page 00:10:19.233 ======================= 00:10:19.233 Host bytes with metadata written: 1885569024 00:10:19.233 Media bytes with metadata written: 1886490624 00:10:19.233 Media bytes erased: 0 00:10:19.233 00:10:19.233 FDP Reclaim unit handle status 00:10:19.233 ============================== 00:10:19.233 Number of RUHS descriptors: 2 00:10:19.233 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000019c8 00:10:19.233 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:10:19.233 00:10:19.233 FDP write on placement id: 0 success 00:10:19.233 00:10:19.233 Set Feature: Enabling FDP events on Placement handle:
#0 Success 00:10:19.233 00:10:19.233 IO mgmt send: RUH update for Placement ID: #0 Success 00:10:19.233 00:10:19.233 Get Feature: FDP Events for Placement handle: #0 00:10:19.233 ======================== 00:10:19.233 Number of FDP Events: 6 00:10:19.233 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:10:19.233 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:10:19.233 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:10:19.233 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:10:19.233 FDP Event: #4 Type: Media Reallocated Enabled: No 00:10:19.233 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:10:19.233 00:10:19.233 FDP events log page 00:10:19.233 =================== 00:10:19.233 Number of FDP events: 1 00:10:19.233 FDP Event #0: 00:10:19.233 Event Type: RU Not Written to Capacity 00:10:19.233 Placement Identifier: Valid 00:10:19.233 NSID: Valid 00:10:19.233 Location: Valid 00:10:19.233 Placement Identifier: 0 00:10:19.233 Event Timestamp: 2 00:10:19.233 Namespace Identifier: 1 00:10:19.233 Reclaim Group Identifier: 0 00:10:19.233 Reclaim Unit Handle Identifier: 0 00:10:19.233 00:10:19.233 FDP test passed 00:10:19.233 00:10:19.233 real 0m0.221s 00:10:19.233 user 0m0.062s 00:10:19.233 sys 0m0.058s 00:10:19.233 ************************************ 00:10:19.233 END TEST nvme_flexible_data_placement 00:10:19.233 ************************************ 00:10:19.233 17:44:39 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:19.233 17:44:39 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:10:19.233 ************************************ 00:10:19.233 END TEST nvme_fdp 00:10:19.233 ************************************ 00:10:19.233 00:10:19.233 real 0m7.427s 00:10:19.233 user 0m0.964s 00:10:19.233 sys 0m1.361s 00:10:19.233 17:44:39 nvme_fdp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:19.233 17:44:39 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:19.492 17:44:39 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:10:19.492 17:44:39 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:19.492 17:44:39 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:19.492 17:44:39 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:19.492 17:44:39 -- common/autotest_common.sh@10 -- # set +x 00:10:19.492 ************************************ 00:10:19.492 START TEST nvme_rpc 00:10:19.492 ************************************ 00:10:19.492 17:44:39 nvme_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:19.492 * Looking for test storage...
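As an aside on the RUHS descriptors in the FDP output above: RUAMW is the Reclaim Unit Available Media Writes counter, commonly read as the number of logical blocks still writable in that reclaim unit (that interpretation comes from the FDP feature definition, not from the log itself). Decoding the two hex values:

    printf 'PID 0x0000: RUAMW=%d\n' $((0x19c8))   # 6600
    printf 'PID 0x4000: RUAMW=%d\n' $((0x6000))   # 24576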
00:10:19.492 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:19.492 17:44:39 nvme_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:19.492 17:44:39 nvme_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:19.492 17:44:39 nvme_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:10:19.492 17:44:39 nvme_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:19.492 17:44:39 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:19.492 17:44:39 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:19.492 17:44:39 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:19.492 17:44:39 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:19.492 17:44:39 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:19.492 17:44:39 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:19.492 17:44:39 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:19.492 17:44:39 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:19.492 17:44:39 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:19.492 17:44:39 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:19.492 17:44:39 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:19.492 17:44:39 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:19.492 17:44:39 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:10:19.492 17:44:39 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:19.492 17:44:39 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:19.492 17:44:39 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:19.492 17:44:39 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:10:19.492 17:44:39 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:19.492 17:44:39 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:10:19.492 17:44:39 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:19.492 17:44:39 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:19.492 17:44:39 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:10:19.492 17:44:39 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:19.492 17:44:39 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:10:19.492 17:44:39 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:19.492 17:44:39 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:19.492 17:44:39 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:19.492 17:44:39 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:10:19.492 17:44:39 nvme_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:19.492 17:44:39 nvme_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:19.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.492 --rc genhtml_branch_coverage=1 00:10:19.492 --rc genhtml_function_coverage=1 00:10:19.492 --rc genhtml_legend=1 00:10:19.492 --rc geninfo_all_blocks=1 00:10:19.492 --rc geninfo_unexecuted_blocks=1 00:10:19.492 00:10:19.492 ' 00:10:19.492 17:44:39 nvme_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:19.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.492 --rc genhtml_branch_coverage=1 00:10:19.492 --rc genhtml_function_coverage=1 00:10:19.492 --rc genhtml_legend=1 00:10:19.492 --rc geninfo_all_blocks=1 00:10:19.492 --rc geninfo_unexecuted_blocks=1 00:10:19.492 00:10:19.492 ' 00:10:19.492 17:44:39 nvme_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:10:19.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.492 --rc genhtml_branch_coverage=1 00:10:19.492 --rc genhtml_function_coverage=1 00:10:19.492 --rc genhtml_legend=1 00:10:19.492 --rc geninfo_all_blocks=1 00:10:19.492 --rc geninfo_unexecuted_blocks=1 00:10:19.492 00:10:19.492 ' 00:10:19.492 17:44:39 nvme_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:19.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.492 --rc genhtml_branch_coverage=1 00:10:19.492 --rc genhtml_function_coverage=1 00:10:19.492 --rc genhtml_legend=1 00:10:19.492 --rc geninfo_all_blocks=1 00:10:19.492 --rc geninfo_unexecuted_blocks=1 00:10:19.492 00:10:19.492 ' 00:10:19.492 17:44:39 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:19.492 17:44:39 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:19.492 17:44:39 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:10:19.492 17:44:39 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:10:19.492 17:44:39 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:10:19.492 17:44:39 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:10:19.492 17:44:39 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:19.492 17:44:39 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:10:19.492 17:44:39 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:19.492 17:44:39 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:19.492 17:44:39 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:19.492 17:44:39 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:10:19.492 17:44:39 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:19.492 17:44:39 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:10:19.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.492 17:44:39 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:19.492 17:44:39 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=78689 00:10:19.492 17:44:39 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:19.492 17:44:39 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:19.492 17:44:39 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 78689 00:10:19.492 17:44:39 nvme_rpc -- common/autotest_common.sh@833 -- # '[' -z 78689 ']' 00:10:19.492 17:44:39 nvme_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.492 17:44:39 nvme_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:19.492 17:44:39 nvme_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.492 17:44:39 nvme_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:19.492 17:44:39 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:19.770 [2024-11-05 17:44:39.516140] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 
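Before attaching anything, nvme_rpc.sh resolves its target through get_first_nvme_bdf, which the trace above expands into gen_nvme.sh piped through jq. The lookup condenses to a few lines (paths exactly as in the log):

    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || exit 1   # the trace found four controllers
    echo "${bdfs[0]}"                 # 0000:00:10.0 becomes $bdf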
00:10:19.770 [2024-11-05 17:44:39.516409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78689 ] 00:10:19.770 [2024-11-05 17:44:39.646294] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:19.770 [2024-11-05 17:44:39.677578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:19.770 [2024-11-05 17:44:39.696909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:19.770 [2024-11-05 17:44:39.696945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.704 17:44:40 nvme_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:20.704 17:44:40 nvme_rpc -- common/autotest_common.sh@866 -- # return 0 00:10:20.704 17:44:40 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:20.704 Nvme0n1 00:10:20.704 17:44:40 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:20.704 17:44:40 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:20.962 request: 00:10:20.962 { 00:10:20.962 "bdev_name": "Nvme0n1", 00:10:20.962 "filename": "non_existing_file", 00:10:20.962 "method": "bdev_nvme_apply_firmware", 00:10:20.962 "req_id": 1 00:10:20.962 } 00:10:20.962 Got JSON-RPC error response 00:10:20.962 response: 00:10:20.962 { 00:10:20.962 "code": -32603, 00:10:20.962 "message": "open file failed." 00:10:20.962 } 00:10:20.962 17:44:40 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:20.962 17:44:40 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:20.962 17:44:40 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:21.221 17:44:41 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:21.221 17:44:41 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 78689 00:10:21.221 17:44:41 nvme_rpc -- common/autotest_common.sh@952 -- # '[' -z 78689 ']' 00:10:21.221 17:44:41 nvme_rpc -- common/autotest_common.sh@956 -- # kill -0 78689 00:10:21.221 17:44:41 nvme_rpc -- common/autotest_common.sh@957 -- # uname 00:10:21.221 17:44:41 nvme_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:21.221 17:44:41 nvme_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78689 00:10:21.221 killing process with pid 78689 00:10:21.221 17:44:41 nvme_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:21.221 17:44:41 nvme_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:21.221 17:44:41 nvme_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78689' 00:10:21.221 17:44:41 nvme_rpc -- common/autotest_common.sh@971 -- # kill 78689 00:10:21.221 17:44:41 nvme_rpc -- common/autotest_common.sh@976 -- # wait 78689 00:10:21.480 ************************************ 00:10:21.480 END TEST nvme_rpc 00:10:21.480 ************************************ 00:10:21.480 00:10:21.480 real 0m2.046s 00:10:21.480 user 0m4.019s 00:10:21.480 sys 0m0.456s 00:10:21.480 17:44:41 nvme_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:21.480 17:44:41 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.480 17:44:41 
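The request/response pair above is the test's deliberate failure path: bdev_nvme_apply_firmware is pointed at a file that does not exist, and the JSON-RPC layer answers with -32603. The same exchange can be reproduced by hand against a running spdk_tgt (RPC method names exactly as traced):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1   # expect "open file failed."
    $rpc bdev_nvme_detach_controller Nvme0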
-- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:21.480 17:44:41 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:21.480 17:44:41 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:21.480 17:44:41 -- common/autotest_common.sh@10 -- # set +x 00:10:21.480 ************************************ 00:10:21.480 START TEST nvme_rpc_timeouts 00:10:21.480 ************************************ 00:10:21.480 17:44:41 nvme_rpc_timeouts -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:21.480 * Looking for test storage... 00:10:21.480 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:21.480 17:44:41 nvme_rpc_timeouts -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:21.480 17:44:41 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lcov --version 00:10:21.480 17:44:41 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:21.480 17:44:41 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:21.480 17:44:41 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:21.480 17:44:41 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:21.480 17:44:41 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:21.480 17:44:41 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:21.480 17:44:41 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:21.480 17:44:41 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:21.480 17:44:41 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:21.480 17:44:41 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:21.480 17:44:41 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:21.480 17:44:41 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:21.480 17:44:41 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:21.480 17:44:41 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:21.480 17:44:41 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:21.480 17:44:41 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:21.480 17:44:41 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:21.480 17:44:41 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:21.480 17:44:41 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:21.480 17:44:41 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:21.480 17:44:41 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:21.480 17:44:41 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:21.480 17:44:41 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:21.480 17:44:41 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:21.480 17:44:41 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:21.480 17:44:41 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:21.480 17:44:41 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:21.480 17:44:41 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:21.480 17:44:41 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:21.480 17:44:41 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:21.480 17:44:41 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:21.480 17:44:41 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:21.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.480 --rc genhtml_branch_coverage=1 00:10:21.480 --rc genhtml_function_coverage=1 00:10:21.480 --rc genhtml_legend=1 00:10:21.480 --rc geninfo_all_blocks=1 00:10:21.480 --rc geninfo_unexecuted_blocks=1 00:10:21.480 00:10:21.480 ' 00:10:21.480 17:44:41 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:21.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.480 --rc genhtml_branch_coverage=1 00:10:21.480 --rc genhtml_function_coverage=1 00:10:21.480 --rc genhtml_legend=1 00:10:21.480 --rc geninfo_all_blocks=1 00:10:21.480 --rc geninfo_unexecuted_blocks=1 00:10:21.480 00:10:21.480 ' 00:10:21.480 17:44:41 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:21.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.480 --rc genhtml_branch_coverage=1 00:10:21.480 --rc genhtml_function_coverage=1 00:10:21.480 --rc genhtml_legend=1 00:10:21.480 --rc geninfo_all_blocks=1 00:10:21.480 --rc geninfo_unexecuted_blocks=1 00:10:21.480 00:10:21.480 ' 00:10:21.480 17:44:41 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:21.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.480 --rc genhtml_branch_coverage=1 00:10:21.480 --rc genhtml_function_coverage=1 00:10:21.480 --rc genhtml_legend=1 00:10:21.480 --rc geninfo_all_blocks=1 00:10:21.480 --rc geninfo_unexecuted_blocks=1 00:10:21.480 00:10:21.480 ' 00:10:21.480 17:44:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:21.480 17:44:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_78743 00:10:21.480 17:44:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_78743 00:10:21.480 17:44:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=78775 00:10:21.480 17:44:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
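The scripts/common.sh trace that keeps reappearing before each test is the lcov version gate: cmp_versions splits the dotted versions into arrays and compares field by field, so 1.15 orders before 2 and the old coverage flags stay enabled. A compact sketch of the same comparison, with missing fields defaulted to 0 (the real helper also handles non-numeric fields):

    lt() {   # succeeds when $1 sorts strictly before $2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2: keep the branch/function flags"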
00:10:21.480 17:44:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 78775 00:10:21.480 17:44:41 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # '[' -z 78775 ']' 00:10:21.480 17:44:41 nvme_rpc_timeouts -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.480 17:44:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:21.480 17:44:41 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:21.480 17:44:41 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.480 17:44:41 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:21.480 17:44:41 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:21.739 [2024-11-05 17:44:41.545451] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:10:21.739 [2024-11-05 17:44:41.546095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78775 ] 00:10:21.739 [2024-11-05 17:44:41.677493] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:21.739 [2024-11-05 17:44:41.706661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:21.739 [2024-11-05 17:44:41.725535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.739 [2024-11-05 17:44:41.725615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.673 Checking default timeout settings: 00:10:22.673 17:44:42 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:22.673 17:44:42 nvme_rpc_timeouts -- common/autotest_common.sh@866 -- # return 0 00:10:22.673 17:44:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:22.673 17:44:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:22.931 Making settings changes with rpc: 00:10:22.931 17:44:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:22.931 17:44:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:22.931 Check default vs. modified settings: 00:10:22.931 17:44:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:10:22.931 17:44:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_78743 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_78743 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:23.497 Setting action_on_timeout is changed as expected. 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_78743 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_78743 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:23.497 Setting timeout_us is changed as expected. 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_78743 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_78743 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:23.497 Setting timeout_admin_us is changed as expected. 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_78743 /tmp/settings_modified_78743 00:10:23.497 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 78775 00:10:23.497 17:44:43 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # '[' -z 78775 ']' 00:10:23.497 17:44:43 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # kill -0 78775 00:10:23.497 17:44:43 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # uname 00:10:23.497 17:44:43 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:23.497 17:44:43 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78775 00:10:23.497 killing process with pid 78775 00:10:23.497 17:44:43 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:23.497 17:44:43 nvme_rpc_timeouts -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:23.497 17:44:43 nvme_rpc_timeouts -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78775' 00:10:23.497 17:44:43 nvme_rpc_timeouts -- common/autotest_common.sh@971 -- # kill 78775 00:10:23.497 17:44:43 nvme_rpc_timeouts -- common/autotest_common.sh@976 -- # wait 78775 00:10:23.755 RPC TIMEOUT SETTING TEST PASSED. 00:10:23.755 17:44:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
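Each "changed as expected" line above comes from the same three-stage extraction run against the two saved-config snapshots; stripped down to a single setting, the comparison is:

    setting_before=$(grep timeout_us /tmp/settings_default_78743 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    setting_modified=$(grep timeout_us /tmp/settings_modified_78743 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [ "$setting_before" != "$setting_modified" ] && echo "Setting timeout_us is changed as expected."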
00:10:23.755 00:10:23.755 real 0m2.187s 00:10:23.755 user 0m4.413s 00:10:23.755 sys 0m0.442s 00:10:23.755 ************************************ 00:10:23.755 END TEST nvme_rpc_timeouts 00:10:23.755 ************************************ 00:10:23.755 17:44:43 nvme_rpc_timeouts -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:23.755 17:44:43 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:23.755 17:44:43 -- spdk/autotest.sh@239 -- # uname -s 00:10:23.755 17:44:43 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:23.755 17:44:43 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:23.755 17:44:43 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:23.755 17:44:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:23.755 17:44:43 -- common/autotest_common.sh@10 -- # set +x 00:10:23.755 ************************************ 00:10:23.755 START TEST sw_hotplug 00:10:23.755 ************************************ 00:10:23.755 17:44:43 sw_hotplug -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:23.755 * Looking for test storage... 00:10:23.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:23.755 17:44:43 sw_hotplug -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:23.755 17:44:43 sw_hotplug -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:23.755 17:44:43 sw_hotplug -- common/autotest_common.sh@1691 -- # lcov --version 00:10:23.755 17:44:43 sw_hotplug -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:23.755 17:44:43 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:23.755 17:44:43 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:23.755 17:44:43 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:23.755 17:44:43 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:23.755 17:44:43 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:23.755 17:44:43 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:23.755 17:44:43 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:23.755 17:44:43 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:23.755 17:44:43 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:23.755 17:44:43 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:23.755 17:44:43 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:23.755 17:44:43 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:23.755 17:44:43 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:23.755 17:44:43 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:23.755 17:44:43 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:23.755 17:44:43 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:23.755 17:44:43 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:23.755 17:44:43 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:23.755 17:44:43 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:23.755 17:44:43 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:23.755 17:44:43 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:23.755 17:44:43 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:23.755 17:44:43 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:23.755 17:44:43 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:23.755 17:44:43 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:23.755 17:44:43 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:23.755 17:44:43 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:23.755 17:44:43 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:23.755 17:44:43 sw_hotplug -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:23.755 17:44:43 sw_hotplug -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:23.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.755 --rc genhtml_branch_coverage=1 00:10:23.755 --rc genhtml_function_coverage=1 00:10:23.755 --rc genhtml_legend=1 00:10:23.755 --rc geninfo_all_blocks=1 00:10:23.755 --rc geninfo_unexecuted_blocks=1 00:10:23.755 00:10:23.755 ' 00:10:23.755 17:44:43 sw_hotplug -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:23.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.755 --rc genhtml_branch_coverage=1 00:10:23.755 --rc genhtml_function_coverage=1 00:10:23.755 --rc genhtml_legend=1 00:10:23.755 --rc geninfo_all_blocks=1 00:10:23.755 --rc geninfo_unexecuted_blocks=1 00:10:23.755 00:10:23.755 ' 00:10:23.755 17:44:43 sw_hotplug -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:23.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.755 --rc genhtml_branch_coverage=1 00:10:23.755 --rc genhtml_function_coverage=1 00:10:23.755 --rc genhtml_legend=1 00:10:23.755 --rc geninfo_all_blocks=1 00:10:23.755 --rc geninfo_unexecuted_blocks=1 00:10:23.755 00:10:23.755 ' 00:10:23.755 17:44:43 sw_hotplug -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:23.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.755 --rc genhtml_branch_coverage=1 00:10:23.755 --rc genhtml_function_coverage=1 00:10:23.755 --rc genhtml_legend=1 00:10:23.755 --rc geninfo_all_blocks=1 00:10:23.755 --rc geninfo_unexecuted_blocks=1 00:10:23.755 00:10:23.755 ' 00:10:23.755 17:44:43 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:24.013 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:24.278 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:24.278 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:24.278 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:24.278 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:24.278 17:44:44 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:24.278 17:44:44 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:24.278 17:44:44 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:10:24.278 17:44:44 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:24.278 17:44:44 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:24.278 17:44:44 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:24.279 17:44:44 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:24.279 17:44:44 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:24.279 17:44:44 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:24.279 17:44:44 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:24.279 17:44:44 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:24.566 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:24.823 Waiting for block devices as requested 00:10:24.823 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:24.823 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:24.823 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:25.080 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:30.378 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:30.378 17:44:49 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:30.378 17:44:49 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:30.378 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:30.378 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:30.378 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:30.637 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:30.895 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:30.895 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:30.895 17:44:50 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:30.895 17:44:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:31.155 17:44:50 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:31.155 17:44:50 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:31.155 17:44:50 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=79614 00:10:31.155 17:44:50 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:31.155 17:44:50 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:31.155 17:44:50 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:31.155 17:44:50 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:31.155 17:44:50 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:31.155 17:44:50 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:31.155 17:44:50 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:31.155 17:44:50 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:31.155 17:44:50 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:10:31.155 17:44:50 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:31.155 17:44:50 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:31.155 17:44:50 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:31.155 17:44:50 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:31.155 17:44:50 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:31.155 Initializing NVMe Controllers 00:10:31.155 Attaching to 0000:00:10.0 00:10:31.155 Attaching to 0000:00:11.0 00:10:31.155 Attached to 0000:00:11.0 00:10:31.155 Attached to 0000:00:10.0 00:10:31.155 Initialization complete. Starting I/O... 
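nvme_in_userspace, traced above, enumerates NVMe controllers by PCI class code: class 01 (mass storage), subclass 08 (NVM), prog-if 02 (NVMe), then filters each BDF through pci_can_use. A condensed sketch of that pipeline, assuming lspci -Dmmn quotes the class field and reports a non-zero prog-if as -p02:

  # Print the BDF of every NVMe controller (class code 0108, prog-if 02).
  lspci -Dmmn | awk '$2 == "\"0108\"" && /-p02/ { print $1 }'

The PCI_ALLOWED='0000:00:10.0 0000:00:11.0' run above is the same filter inverted: setup.sh binds only allow-listed controllers, which is why 0000:00:12.0 and 0000:00:13.0 show up as skipped, denied controllers.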
00:10:31.155 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:31.155 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:31.155 00:10:32.539 QEMU NVMe Ctrl (12341 ): 2750 I/Os completed (+2750) 00:10:32.539 QEMU NVMe Ctrl (12340 ): 2785 I/Os completed (+2785) 00:10:32.539 00:10:33.483 QEMU NVMe Ctrl (12341 ): 6463 I/Os completed (+3713) 00:10:33.483 QEMU NVMe Ctrl (12340 ): 6503 I/Os completed (+3718) 00:10:33.483 00:10:34.424 QEMU NVMe Ctrl (12341 ): 10669 I/Os completed (+4206) 00:10:34.424 QEMU NVMe Ctrl (12340 ): 10654 I/Os completed (+4151) 00:10:34.424 00:10:35.368 QEMU NVMe Ctrl (12341 ): 14529 I/Os completed (+3860) 00:10:35.368 QEMU NVMe Ctrl (12340 ): 14638 I/Os completed (+3984) 00:10:35.368 00:10:36.312 QEMU NVMe Ctrl (12341 ): 18333 I/Os completed (+3804) 00:10:36.312 QEMU NVMe Ctrl (12340 ): 18450 I/Os completed (+3812) 00:10:36.312 00:10:37.256 17:44:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:37.256 17:44:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:37.256 17:44:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:37.256 [2024-11-05 17:44:56.930872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:37.256 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:37.256 [2024-11-05 17:44:56.932356] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.256 [2024-11-05 17:44:56.932408] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.256 [2024-11-05 17:44:56.932429] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.256 [2024-11-05 17:44:56.932442] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.256 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:37.256 [2024-11-05 17:44:56.933822] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.256 [2024-11-05 17:44:56.933860] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.256 [2024-11-05 17:44:56.933875] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.256 [2024-11-05 17:44:56.933886] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.256 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:10:37.256 EAL: Scan for (pci) bus failed. 00:10:37.256 17:44:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:37.256 17:44:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:37.256 [2024-11-05 17:44:56.952475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:37.256 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:37.256 [2024-11-05 17:44:56.953418] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.256 [2024-11-05 17:44:56.953458] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.256 [2024-11-05 17:44:56.953474] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.256 [2024-11-05 17:44:56.953490] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.256 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:37.256 [2024-11-05 17:44:56.954541] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.256 [2024-11-05 17:44:56.954572] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.256 [2024-11-05 17:44:56.954590] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.256 [2024-11-05 17:44:56.954607] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.256 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:37.256 EAL: Scan for (pci) bus failed. 00:10:37.256 17:44:56 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:37.256 17:44:56 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:37.256 17:44:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:37.256 17:44:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:37.256 17:44:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:37.256 00:10:37.256 17:44:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:37.256 17:44:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:37.256 17:44:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:37.256 17:44:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:37.256 17:44:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:37.256 Attaching to 0000:00:10.0 00:10:37.256 Attached to 0000:00:10.0 00:10:37.517 17:44:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:37.517 17:44:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:37.517 17:44:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:37.517 Attaching to 0000:00:11.0 00:10:37.517 Attached to 0000:00:11.0 00:10:38.456 QEMU NVMe Ctrl (12340 ): 3704 I/Os completed (+3704) 00:10:38.456 QEMU NVMe Ctrl (12341 ): 3371 I/Os completed (+3371) 00:10:38.456 00:10:39.389 QEMU NVMe Ctrl (12340 ): 8045 I/Os completed (+4341) 00:10:39.389 QEMU NVMe Ctrl (12341 ): 7689 I/Os completed (+4318) 00:10:39.389 00:10:40.337 QEMU NVMe Ctrl (12340 ): 12341 I/Os completed (+4296) 00:10:40.337 QEMU NVMe Ctrl (12341 ): 11964 I/Os completed (+4275) 00:10:40.337 00:10:41.270 QEMU NVMe Ctrl (12340 ): 16653 I/Os completed (+4312) 00:10:41.270 QEMU NVMe Ctrl (12341 ): 16260 I/Os completed (+4296) 00:10:41.270 00:10:42.203 QEMU NVMe Ctrl (12340 ): 20975 I/Os completed (+4322) 00:10:42.203 QEMU NVMe Ctrl (12341 ): 20530 I/Os completed (+4270) 00:10:42.203 00:10:43.146 QEMU NVMe Ctrl (12340 ): 24906 I/Os completed (+3931) 00:10:43.146 QEMU NVMe Ctrl (12341 ): 24417 I/Os completed (+3887) 00:10:43.146 00:10:44.526 QEMU NVMe Ctrl (12340 ): 28817 I/Os completed (+3911) 00:10:44.526 QEMU NVMe Ctrl (12341 ): 28341 I/Os completed (+3924) 
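After each removal, the @58-62 echoes steer the returning controller back to uio_pci_generic. The standard kernel interface for that is a driver_override plus a probe request; a hedged sketch, since the exact sysfs nodes live inside sw_hotplug.sh rather than in the trace:

  bdf=0000:00:10.0
  echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"  # pin the driver
  echo "$bdf" > /sys/bus/pci/drivers_probe                            # ask the kernel to probe it
  echo '' > "/sys/bus/pci/devices/$bdf/driver_override"               # clear the override (the @62 echo '')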
00:10:44.526 00:10:45.460 QEMU NVMe Ctrl (12340 ): 33090 I/Os completed (+4273) 00:10:45.460 QEMU NVMe Ctrl (12341 ): 32587 I/Os completed (+4246) 00:10:45.460 00:10:46.425 QEMU NVMe Ctrl (12340 ): 37313 I/Os completed (+4223) 00:10:46.425 QEMU NVMe Ctrl (12341 ): 36840 I/Os completed (+4253) 00:10:46.425 00:10:47.359 QEMU NVMe Ctrl (12340 ): 41821 I/Os completed (+4508) 00:10:47.359 QEMU NVMe Ctrl (12341 ): 41330 I/Os completed (+4490) 00:10:47.359 00:10:48.299 QEMU NVMe Ctrl (12340 ): 45825 I/Os completed (+4004) 00:10:48.299 QEMU NVMe Ctrl (12341 ): 45276 I/Os completed (+3946) 00:10:48.299 00:10:49.238 QEMU NVMe Ctrl (12340 ): 49596 I/Os completed (+3771) 00:10:49.238 QEMU NVMe Ctrl (12341 ): 49283 I/Os completed (+4007) 00:10:49.238 00:10:49.499 17:45:09 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:49.499 17:45:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:49.499 17:45:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:49.499 17:45:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:49.499 [2024-11-05 17:45:09.266263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:49.499 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:49.499 [2024-11-05 17:45:09.267294] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.500 [2024-11-05 17:45:09.267335] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.500 [2024-11-05 17:45:09.267353] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.500 [2024-11-05 17:45:09.267369] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.500 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:49.500 [2024-11-05 17:45:09.268691] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.500 [2024-11-05 17:45:09.268721] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.500 [2024-11-05 17:45:09.268736] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.500 [2024-11-05 17:45:09.268748] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.500 17:45:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:49.500 17:45:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:49.500 [2024-11-05 17:45:09.290639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:49.500 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:49.500 [2024-11-05 17:45:09.291577] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.500 [2024-11-05 17:45:09.291616] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.500 [2024-11-05 17:45:09.291631] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.500 [2024-11-05 17:45:09.291646] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.500 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:49.500 [2024-11-05 17:45:09.292898] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.500 [2024-11-05 17:45:09.292935] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.500 [2024-11-05 17:45:09.292947] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.500 [2024-11-05 17:45:09.292960] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.500 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:49.500 EAL: Scan for (pci) bus failed. 00:10:49.500 17:45:09 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:49.500 17:45:09 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:49.500 17:45:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:49.500 17:45:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:49.500 17:45:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:49.500 17:45:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:49.760 17:45:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:49.760 17:45:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:49.760 17:45:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:49.760 17:45:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:49.760 Attaching to 0000:00:10.0 00:10:49.760 Attached to 0000:00:10.0 00:10:49.760 17:45:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:49.760 17:45:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:49.760 17:45:09 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:49.760 Attaching to 0000:00:11.0 00:10:49.760 Attached to 0000:00:11.0 00:10:50.331 QEMU NVMe Ctrl (12340 ): 2280 I/Os completed (+2280) 00:10:50.331 QEMU NVMe Ctrl (12341 ): 1991 I/Os completed (+1991) 00:10:50.331 00:10:51.269 QEMU NVMe Ctrl (12340 ): 6288 I/Os completed (+4008) 00:10:51.269 QEMU NVMe Ctrl (12341 ): 5983 I/Os completed (+3992) 00:10:51.269 00:10:52.205 QEMU NVMe Ctrl (12340 ): 11470 I/Os completed (+5182) 00:10:52.205 QEMU NVMe Ctrl (12341 ): 11056 I/Os completed (+5073) 00:10:52.205 00:10:53.137 QEMU NVMe Ctrl (12340 ): 17096 I/Os completed (+5626) 00:10:53.138 QEMU NVMe Ctrl (12341 ): 16708 I/Os completed (+5652) 00:10:53.138 00:10:54.514 QEMU NVMe Ctrl (12340 ): 21742 I/Os completed (+4646) 00:10:54.514 QEMU NVMe Ctrl (12341 ): 21456 I/Os completed (+4748) 00:10:54.514 00:10:55.447 QEMU NVMe Ctrl (12340 ): 27456 I/Os completed (+5714) 00:10:55.447 QEMU NVMe Ctrl (12341 ): 27044 I/Os completed (+5588) 00:10:55.447 00:10:56.380 QEMU NVMe Ctrl (12340 ): 32420 I/Os completed (+4964) 00:10:56.380 QEMU NVMe Ctrl (12341 ): 31812 I/Os completed (+4768) 00:10:56.380 
00:10:57.322 QEMU NVMe Ctrl (12340 ): 36502 I/Os completed (+4082) 00:10:57.322 QEMU NVMe Ctrl (12341 ): 35900 I/Os completed (+4088) 00:10:57.322 00:10:58.265 QEMU NVMe Ctrl (12340 ): 40132 I/Os completed (+3630) 00:10:58.265 QEMU NVMe Ctrl (12341 ): 39530 I/Os completed (+3630) 00:10:58.265 00:10:59.209 QEMU NVMe Ctrl (12340 ): 43155 I/Os completed (+3023) 00:10:59.209 QEMU NVMe Ctrl (12341 ): 42571 I/Os completed (+3041) 00:10:59.209 00:11:00.160 QEMU NVMe Ctrl (12340 ): 47538 I/Os completed (+4383) 00:11:00.160 QEMU NVMe Ctrl (12341 ): 46955 I/Os completed (+4384) 00:11:00.160 00:11:01.546 QEMU NVMe Ctrl (12340 ): 51926 I/Os completed (+4388) 00:11:01.546 QEMU NVMe Ctrl (12341 ): 51339 I/Os completed (+4384) 00:11:01.546 00:11:01.807 17:45:21 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:01.807 17:45:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:01.807 17:45:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:01.807 17:45:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:01.807 [2024-11-05 17:45:21.588407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:01.807 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:01.807 [2024-11-05 17:45:21.589253] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.807 [2024-11-05 17:45:21.589287] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.807 [2024-11-05 17:45:21.589304] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.807 [2024-11-05 17:45:21.589314] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.807 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:01.807 [2024-11-05 17:45:21.590323] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.807 [2024-11-05 17:45:21.590359] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.807 [2024-11-05 17:45:21.590372] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.807 [2024-11-05 17:45:21.590382] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.807 17:45:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:01.807 17:45:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:01.807 [2024-11-05 17:45:21.609639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
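The +N counters above land roughly one second apart, so averaging the per-line deltas gives a ballpark IOPS figure for each controller under the hotplug example app. One way to pull that from a saved copy of this log (build.log is a hypothetical file name):

  awk '/QEMU NVMe Ctrl \(12340/ && match($0, /\(\+[0-9]+\)/) {
         n++; sum += substr($0, RSTART + 2, RLENGTH - 3)   # digits of "(+N)"
       }
       END { if (n) printf "ctrl 12340: ~%.0f IOPS over %d samples\n", sum / n, n }' build.log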
00:11:01.807 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:01.807 [2024-11-05 17:45:21.610446] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.808 [2024-11-05 17:45:21.610482] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.808 [2024-11-05 17:45:21.610496] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.808 [2024-11-05 17:45:21.610509] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.808 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:01.808 [2024-11-05 17:45:21.611730] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.808 [2024-11-05 17:45:21.611767] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.808 [2024-11-05 17:45:21.611778] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.808 [2024-11-05 17:45:21.611791] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.808 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/numa_node 00:11:01.808 PCI_BUS: Cannot open sysfs resource 00:11:01.808 PCI_BUS: pci_scan_one(): cannot parse resource 00:11:01.808 EAL: Scan for (pci) bus failed. 00:11:01.808 17:45:21 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:01.808 17:45:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:01.808 17:45:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:01.808 17:45:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:01.808 17:45:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:02.068 17:45:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:02.068 17:45:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:02.068 17:45:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:02.068 17:45:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:02.068 17:45:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:02.068 Attaching to 0000:00:10.0 00:11:02.068 Attached to 0000:00:10.0 00:11:02.068 17:45:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:02.068 17:45:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:02.068 17:45:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:02.068 Attaching to 0000:00:11.0 00:11:02.068 Attached to 0000:00:11.0 00:11:02.068 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:02.068 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:02.068 [2024-11-05 17:45:21.910350] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:14.299 17:45:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:14.299 17:45:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:14.299 17:45:33 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.98 00:11:14.299 17:45:33 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.98 00:11:14.299 17:45:33 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:14.299 17:45:33 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.98 00:11:14.299 17:45:33 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.98 2 00:11:14.299 
remove_attach_helper took 42.98s to complete (handling 2 nvme drive(s)) 17:45:33 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:20.884 17:45:39 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 79614 00:11:20.884 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (79614) - No such process 00:11:20.884 17:45:39 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 79614 00:11:20.884 17:45:39 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:20.884 17:45:39 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:20.884 17:45:39 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:20.884 17:45:39 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=80163 00:11:20.884 17:45:39 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:20.884 17:45:39 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 80163 00:11:20.884 17:45:39 sw_hotplug -- common/autotest_common.sh@833 -- # '[' -z 80163 ']' 00:11:20.884 17:45:39 sw_hotplug -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.884 17:45:39 sw_hotplug -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:20.885 17:45:39 sw_hotplug -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.885 17:45:39 sw_hotplug -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:20.885 17:45:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:20.885 17:45:39 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:20.885 [2024-11-05 17:45:39.997863] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:11:20.885 [2024-11-05 17:45:39.997983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80163 ] 00:11:20.885 [2024-11-05 17:45:40.129245] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
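kill -0 above is a pure liveness probe: signal 0 delivers nothing but reports whether the PID still exists, confirming the hotplug example (79614) is gone before spdk_tgt starts. waitforlisten then blocks until the new target (80163) is alive and answering on /var/tmp/spdk.sock. A rough shape of that wait, assuming only a liveness check plus a socket check (the real helper also round-trips an RPC before returning):

  wait_for_tgt() {   # hypothetical name; a sketch of what waitforlisten amounts to
    local pid=$1 sock=${2:-/var/tmp/spdk.sock}
    local i
    for ((i = 0; i < 100; i++)); do
      kill -0 "$pid" 2>/dev/null || return 1   # target died while starting
      [[ -S $sock ]] && return 0               # RPC socket is up
      sleep 0.1
    done
    return 1                                   # timed out
  }
  wait_for_tgt "$spdk_tgt_pid" || exit 1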
00:11:20.885 [2024-11-05 17:45:40.158547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.885 [2024-11-05 17:45:40.200017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.885 17:45:40 sw_hotplug -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:20.885 17:45:40 sw_hotplug -- common/autotest_common.sh@866 -- # return 0 00:11:20.885 17:45:40 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:20.885 17:45:40 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.885 17:45:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:20.885 17:45:40 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.885 17:45:40 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:20.885 17:45:40 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:20.885 17:45:40 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:20.885 17:45:40 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:20.885 17:45:40 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:20.885 17:45:40 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:20.885 17:45:40 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:20.885 17:45:40 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:20.885 17:45:40 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:20.885 17:45:40 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:20.885 17:45:40 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:20.885 17:45:40 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:20.885 17:45:40 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:27.617 17:45:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:27.617 17:45:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:27.617 17:45:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:27.617 17:45:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:27.617 17:45:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:27.617 17:45:46 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:27.617 17:45:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:27.617 17:45:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:27.617 17:45:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:27.617 17:45:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:27.617 17:45:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:27.617 17:45:46 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.617 17:45:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:27.617 17:45:46 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.617 17:45:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:27.617 17:45:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:27.617 [2024-11-05 17:45:46.947191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
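This pass runs with use_bdev=true, so instead of watching sysfs the test asks the target what it can still see: bdev_bdfs (traced at @12-13) is just bdev_get_bdevs reduced to the backing PCI addresses:

  scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[].driver_specific.nvme[].pci_address' \
    | sort -u        # one BDF per NVMe controller still backing a bdev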
00:11:27.617 [2024-11-05 17:45:46.948335] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.617 [2024-11-05 17:45:46.948368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.617 [2024-11-05 17:45:46.948380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.617 [2024-11-05 17:45:46.948396] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.617 [2024-11-05 17:45:46.948403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.617 [2024-11-05 17:45:46.948413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.617 [2024-11-05 17:45:46.948420] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.617 [2024-11-05 17:45:46.948428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.617 [2024-11-05 17:45:46.948435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.617 [2024-11-05 17:45:46.948443] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.617 [2024-11-05 17:45:46.948449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.617 [2024-11-05 17:45:46.948458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.617 [2024-11-05 17:45:47.347189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
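The ABORTED - BY REQUEST (00/07) completions are the expected teardown signature: each controller keeps four ASYNC EVENT REQUESTs outstanding (cid 187-190), and surprise removal force-completes them with status code type 0x0 / status code 0x07, Command Abort Requested. A quick sanity count over a saved log (file name hypothetical):

  grep -c 'ABORTED - BY REQUEST (00/07)' build.log   # four per controller per event in this run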
00:11:27.617 [2024-11-05 17:45:47.348317] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.617 [2024-11-05 17:45:47.348345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.617 [2024-11-05 17:45:47.348358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.617 [2024-11-05 17:45:47.348367] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.617 [2024-11-05 17:45:47.348376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.617 [2024-11-05 17:45:47.348383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.617 [2024-11-05 17:45:47.348391] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.617 [2024-11-05 17:45:47.348398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.617 [2024-11-05 17:45:47.348407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.617 [2024-11-05 17:45:47.348414] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.617 [2024-11-05 17:45:47.348421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.617 [2024-11-05 17:45:47.348428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.617 17:45:47 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:27.617 17:45:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:27.617 17:45:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:27.617 17:45:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:27.617 17:45:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:27.617 17:45:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:27.617 17:45:47 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.617 17:45:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:27.617 17:45:47 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.617 17:45:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:27.617 17:45:47 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:27.617 17:45:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:27.617 17:45:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:27.617 17:45:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:27.878 17:45:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:27.878 17:45:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:27.878 17:45:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:27.878 17:45:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:27.878 17:45:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:27.878 17:45:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:27.878 17:45:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:27.878 17:45:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:40.106 17:45:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:40.106 17:45:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:40.106 17:45:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:40.106 17:45:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:40.106 17:45:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:40.106 17:45:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:40.106 17:45:59 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.106 17:45:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:40.106 17:45:59 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.106 17:45:59 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:40.106 17:45:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:40.106 17:45:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:40.106 17:45:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:40.106 17:45:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:40.106 17:45:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:40.106 17:45:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:40.106 17:45:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:40.106 17:45:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:40.106 17:45:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:40.106 17:45:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:40.106 17:45:59 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.106 17:45:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:40.106 17:45:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:40.106 [2024-11-05 17:45:59.847384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
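The @50-51 wait condenses to: keep polling bdev_bdfs while any controller still backs a bdev, naming the stragglers. In isolation:

  bdev_bdfs() {   # the rpc | jq | sort pipeline traced at @12-13
    scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u
  }
  while bdfs=$(bdev_bdfs); [[ -n $bdfs ]]; do
    printf 'Still waiting for %s to be gone\n' $bdfs   # unquoted on purpose: one line per BDF
    sleep 0.5
  done

Once both devices re-attach, the @71 check compares the refreshed list against the expected pair; the right-hand side of a [[ == ]] is a glob pattern unless quoted, which is why the trace shows it backslash-escaped (\0\0\0\0\:\0\0\:\1\0\.\0 ...).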
00:11:40.106 [2024-11-05 17:45:59.848514] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.106 [2024-11-05 17:45:59.848549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.106 [2024-11-05 17:45:59.848561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.106 [2024-11-05 17:45:59.848577] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.106 [2024-11-05 17:45:59.848584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.106 [2024-11-05 17:45:59.848593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.106 [2024-11-05 17:45:59.848600] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.106 [2024-11-05 17:45:59.848608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.106 [2024-11-05 17:45:59.848615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.106 [2024-11-05 17:45:59.848622] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.106 [2024-11-05 17:45:59.848629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.106 [2024-11-05 17:45:59.848641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.106 17:45:59 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.106 17:45:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:40.106 17:45:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:40.367 [2024-11-05 17:46:00.247392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:40.367 [2024-11-05 17:46:00.248526] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.367 [2024-11-05 17:46:00.248554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.367 [2024-11-05 17:46:00.248568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.367 [2024-11-05 17:46:00.248581] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.367 [2024-11-05 17:46:00.248591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.367 [2024-11-05 17:46:00.248598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.367 [2024-11-05 17:46:00.248606] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.367 [2024-11-05 17:46:00.248613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.367 [2024-11-05 17:46:00.248621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.367 [2024-11-05 17:46:00.248628] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.367 [2024-11-05 17:46:00.248638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.367 [2024-11-05 17:46:00.248645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.628 17:46:00 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:40.628 17:46:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:40.628 17:46:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:40.628 17:46:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:40.628 17:46:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:40.628 17:46:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:40.628 17:46:00 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.628 17:46:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:40.628 17:46:00 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.628 17:46:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:40.628 17:46:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:40.628 17:46:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:40.628 17:46:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:40.628 17:46:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:40.628 17:46:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:40.888 17:46:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:40.888 17:46:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:40.888 17:46:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:40.888 17:46:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:40.889 17:46:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:40.889 17:46:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:40.889 17:46:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:53.145 17:46:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:53.145 17:46:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:53.145 17:46:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:53.145 17:46:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:53.145 17:46:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:53.145 17:46:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:53.145 17:46:12 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.145 17:46:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:53.145 17:46:12 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.145 17:46:12 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:53.145 17:46:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:53.145 17:46:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:53.145 17:46:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:53.145 17:46:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:53.145 17:46:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:53.145 17:46:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:53.145 17:46:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:53.145 17:46:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:53.145 17:46:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:53.145 17:46:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:53.145 17:46:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:53.145 17:46:12 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.145 17:46:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:53.145 17:46:12 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.145 17:46:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:53.145 17:46:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:53.145 [2024-11-05 17:46:12.847612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:53.145 [2024-11-05 17:46:12.848758] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.145 [2024-11-05 17:46:12.848794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:53.145 [2024-11-05 17:46:12.848807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.145 [2024-11-05 17:46:12.848823] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.145 [2024-11-05 17:46:12.848830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:53.145 [2024-11-05 17:46:12.848839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.145 [2024-11-05 17:46:12.848845] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.145 [2024-11-05 17:46:12.848853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:53.145 [2024-11-05 17:46:12.848859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.145 [2024-11-05 17:46:12.848868] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.145 [2024-11-05 17:46:12.848875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:53.145 [2024-11-05 17:46:12.848883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.407 [2024-11-05 17:46:13.247602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:53.407 [2024-11-05 17:46:13.248694] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.407 [2024-11-05 17:46:13.248722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:53.407 [2024-11-05 17:46:13.248735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.407 [2024-11-05 17:46:13.248744] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.407 [2024-11-05 17:46:13.248755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:53.407 [2024-11-05 17:46:13.248762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.407 [2024-11-05 17:46:13.248772] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.407 [2024-11-05 17:46:13.248779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:53.407 [2024-11-05 17:46:13.248787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.407 [2024-11-05 17:46:13.248793] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.407 [2024-11-05 17:46:13.248801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:53.407 [2024-11-05 17:46:13.248808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.407 17:46:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:53.407 17:46:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:53.407 17:46:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:53.407 17:46:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:53.407 17:46:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:53.407 17:46:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:53.407 17:46:13 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.407 17:46:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:53.407 17:46:13 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.407 17:46:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:53.407 17:46:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:53.669 17:46:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:53.669 17:46:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:53.669 17:46:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:53.669 17:46:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:53.669 17:46:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:53.669 17:46:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:53.669 17:46:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:53.669 17:46:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:53.669 17:46:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:53.669 17:46:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:53.669 17:46:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:05.908 17:46:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:05.908 17:46:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:05.908 17:46:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:05.908 17:46:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:05.908 17:46:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:05.908 17:46:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:05.908 17:46:25 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.908 17:46:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:05.908 17:46:25 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.908 17:46:25 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:05.908 17:46:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:05.908 17:46:25 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.84 00:12:05.908 17:46:25 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.84 00:12:05.908 17:46:25 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:05.908 17:46:25 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.84 00:12:05.908 17:46:25 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.84 2 00:12:05.908 remove_attach_helper took 44.84s to complete (handling 2 nvme drive(s)) 17:46:25 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:12:05.908 17:46:25 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.908 17:46:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:05.908 17:46:25 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.908 17:46:25 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:05.908 17:46:25 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.908 17:46:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:05.908 17:46:25 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.908 17:46:25 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:12:05.908 17:46:25 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:05.908 17:46:25 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:05.908 17:46:25 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:12:05.908 17:46:25 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:12:05.908 17:46:25 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:12:05.908 17:46:25 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:12:05.908 17:46:25 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:12:05.908 17:46:25 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:05.908 17:46:25 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:05.908 17:46:25 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:05.908 17:46:25 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:05.908 17:46:25 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:12.502 17:46:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:12.502 17:46:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:12.502 17:46:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:12.502 17:46:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:12.502 17:46:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:12.502 17:46:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:12.502 17:46:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:12.502 17:46:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:12.502 17:46:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:12.502 17:46:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:12.502 17:46:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:12.502 17:46:31 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.502 17:46:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:12.502 17:46:31 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.502 17:46:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:12.502 17:46:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:12.502 [2024-11-05 17:46:31.815613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:12.502 [2024-11-05 17:46:31.816670] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.502 [2024-11-05 17:46:31.816707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.502 [2024-11-05 17:46:31.816720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.502 [2024-11-05 17:46:31.816736] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.502 [2024-11-05 17:46:31.816743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.502 [2024-11-05 17:46:31.816752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.502 [2024-11-05 17:46:31.816758] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.502 [2024-11-05 17:46:31.816769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.502 [2024-11-05 17:46:31.816776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.502 [2024-11-05 17:46:31.816786] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.502 [2024-11-05 17:46:31.816793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.502 [2024-11-05 17:46:31.816801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.502 [2024-11-05 17:46:32.215599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
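Before this final pass the test bounces the target's hotplug monitor (the @119-120 RPCs above), so the detach/attach cycle below runs against a freshly re-armed poller:

  scripts/rpc.py bdev_nvme_set_hotplug -d   # stop watching for NVMe add/remove
  scripts/rpc.py bdev_nvme_set_hotplug -e   # re-enable monitoring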
00:12:12.503 [2024-11-05 17:46:32.218241] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.503 [2024-11-05 17:46:32.218273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.503 [2024-11-05 17:46:32.218285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.503 [2024-11-05 17:46:32.218295] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.503 [2024-11-05 17:46:32.218304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.503 [2024-11-05 17:46:32.218311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.503 [2024-11-05 17:46:32.218320] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.503 [2024-11-05 17:46:32.218326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.503 [2024-11-05 17:46:32.218335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.503 [2024-11-05 17:46:32.218341] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.503 [2024-11-05 17:46:32.218352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.503 [2024-11-05 17:46:32.218358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.503 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:12.503 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:12.503 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:12.503 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:12.503 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:12.503 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:12.503 17:46:32 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.503 17:46:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:12.503 17:46:32 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.503 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:12.503 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:12.503 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:12.503 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:12.503 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:12.503 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:12.503 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:12.503 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:12.503 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:12.503 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:12.762 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:12.763 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:12.763 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:24.983 17:46:44 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:24.983 17:46:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:24.983 17:46:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:24.983 17:46:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:24.983 17:46:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:24.983 17:46:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:24.983 17:46:44 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.983 17:46:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:24.983 17:46:44 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.983 17:46:44 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:24.983 17:46:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:24.983 17:46:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:24.983 17:46:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:24.983 [2024-11-05 17:46:44.615830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:24.983 [2024-11-05 17:46:44.617026] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.983 [2024-11-05 17:46:44.617062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.983 [2024-11-05 17:46:44.617086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.983 [2024-11-05 17:46:44.617103] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.983 [2024-11-05 17:46:44.617111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.983 [2024-11-05 17:46:44.617120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.983 [2024-11-05 17:46:44.617127] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.983 [2024-11-05 17:46:44.617136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.983 [2024-11-05 17:46:44.617143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.983 [2024-11-05 17:46:44.617152] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.983 [2024-11-05 17:46:44.617159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.983 [2024-11-05 17:46:44.617168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.983 17:46:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:24.983 17:46:44 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:24.983 17:46:44 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:24.983 17:46:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:24.983 17:46:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:24.983 17:46:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:24.983 17:46:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:24.983 17:46:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:24.983 17:46:44 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.983 17:46:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:24.983 17:46:44 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.983 17:46:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:24.983 17:46:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:25.242 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:25.242 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:25.242 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:25.242 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:25.242 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:25.242 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:25.242 17:46:45 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.242 17:46:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:25.242 17:46:45 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.242 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:25.242 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:25.501 [2024-11-05 17:46:45.315830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
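The repeating "(( 1 > 0 ))" / "sleep 0.5" / "Still waiting for ... to be gone" pattern here is the removal-wait loop at nvme/sw_hotplug.sh@50-51: after surprise-removing the devices, the test polls the target until no bdev reports a PCI address any more. A hedged sketch (the exact statement order inside the real loop may differ slightly from this):

    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done

In this pass 0000:00:10.0 disappears first, so the loop reports only 0000:00:11.0 for a couple of iterations before both are gone.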
00:12:25.501 [2024-11-05 17:46:45.316893] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.502 [2024-11-05 17:46:45.316925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:25.502 [2024-11-05 17:46:45.316939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:25.502 [2024-11-05 17:46:45.316951] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.502 [2024-11-05 17:46:45.316960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:25.502 [2024-11-05 17:46:45.316967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:25.502 [2024-11-05 17:46:45.316975] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.502 [2024-11-05 17:46:45.316982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:25.502 [2024-11-05 17:46:45.316991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:25.502 [2024-11-05 17:46:45.316997] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:25.502 [2024-11-05 17:46:45.317057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:25.502 [2024-11-05 17:46:45.317078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:25.763 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:25.763 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:25.763 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:25.763 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:25.763 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:25.763 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:25.763 17:46:45 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.763 17:46:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:25.763 17:46:45 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.024 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:26.024 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:26.024 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:26.024 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:26.024 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:26.024 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:26.024 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:26.024 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:26.024 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:26.024 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
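With both controllers gone, the trace re-attaches them (nvme/sw_hotplug.sh@56-62): one "echo 1", then per device an "echo uio_pci_generic", the BDF twice, and an empty string. xtrace does not print redirection targets, so the sysfs paths below are assumptions based on the conventional Linux PCI rebind sequence, not something this log confirms:

    echo 1 > /sys/bus/pci/rescan                    # @56: assumed target
    for dev in "${nvmes[@]}"; do
        # @59: steer the device toward the userspace driver (assumed path)
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
        echo "$dev" > /sys/bus/pci/drivers_probe    # @60/@61: assumed; the BDF is echoed twice in the trace
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"   # @62: clear the override again
    done

The 12-second sleep that follows (@66) gives the SPDK hotplug poller time to re-enumerate both controllers before the BDF list is checked against the expected pair (@70-71).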
00:12:26.024 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:26.024 17:46:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:26.024 17:46:46 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:38.250 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:38.250 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:38.250 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:38.250 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:38.250 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:38.250 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:38.250 17:46:58 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.250 17:46:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:38.250 17:46:58 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.250 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:38.250 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:38.250 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:38.250 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:38.250 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:38.250 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:38.250 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:38.250 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:38.250 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:38.250 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:38.250 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:38.250 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:38.250 17:46:58 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.250 17:46:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:38.250 17:46:58 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.250 [2024-11-05 17:46:58.116054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
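Each hotplug event opens with the hot-remove half traced at nvme/sw_hotplug.sh@38-40: the event counter is decremented and a 1 is echoed once per device. The redirection target is again invisible in the xtrace; writing to the device's sysfs remove node is the conventional way to detach a PCI function, so the path here is an assumption:

    (( hotplug_events-- ))
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"   # assumed target; the trace shows only 'echo 1'
    done

The "nvme_ctrlr_fail ... in failed state" ERROR and the batches of "ASYNC EVENT REQUEST ... ABORTED - BY REQUEST" notices after each removal are the expected teardown, not failures: the driver aborts the four outstanding admin AER commands (cid 187-190) when the controller vanishes, and each completes with status 00/07 (command abort requested).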
00:12:38.250 [2024-11-05 17:46:58.117123] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:38.250 [2024-11-05 17:46:58.117152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:38.250 [2024-11-05 17:46:58.117165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:38.250 [2024-11-05 17:46:58.117184] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:38.250 [2024-11-05 17:46:58.117192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:38.250 [2024-11-05 17:46:58.117201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:38.250 [2024-11-05 17:46:58.117208] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:38.250 [2024-11-05 17:46:58.117217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:38.250 [2024-11-05 17:46:58.117223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:38.250 [2024-11-05 17:46:58.117232] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:38.250 [2024-11-05 17:46:58.117238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:38.250 [2024-11-05 17:46:58.117247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:38.250 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:38.250 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:38.815 [2024-11-05 17:46:58.516044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
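Both the 44.84s figure reported earlier and the 45.19s one further below come from the timing harness traced at autotest_common.sh@707-720 and nvme/sw_hotplug.sh@19-22: TIMEFORMAT=%2R narrows bash's time builtin to whole seconds, which is captured and echoed as helper_time. A hedged sketch of that plumbing (the real helper also juggles file descriptors via the [[ -t 0 ]] / exec pair at @709, elided here):

    timing_cmd() {
        local cmd_es=0 time=0 TIMEFORMAT=%2R
        # capture the time keyword's stderr output while discarding stdout
        time=$( { time "$@"; } 2>&1 >/dev/null ) || cmd_es=$?
        echo "$time"
        return "$cmd_es"
    }

    debug_remove_attach_helper() {
        local helper_time=0
        helper_time=$(timing_cmd remove_attach_helper "$@")
        # nvme_count is assumed; the log shows it expanding to 2
        printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
            "$helper_time" "$nvme_count"
    }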
00:12:38.815 [2024-11-05 17:46:58.517101] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:38.815 [2024-11-05 17:46:58.517126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:38.815 [2024-11-05 17:46:58.517138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:38.815 [2024-11-05 17:46:58.517148] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:38.815 [2024-11-05 17:46:58.517157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:38.815 [2024-11-05 17:46:58.517164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:38.815 [2024-11-05 17:46:58.517176] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:38.815 [2024-11-05 17:46:58.517183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:38.815 [2024-11-05 17:46:58.517192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:38.815 [2024-11-05 17:46:58.517198] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:38.815 [2024-11-05 17:46:58.517206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:38.815 [2024-11-05 17:46:58.517213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:38.815 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:38.815 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:38.816 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:38.816 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:38.816 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:38.816 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:38.816 17:46:58 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.816 17:46:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:38.816 17:46:58 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.816 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:38.816 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:38.816 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:38.816 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:38.816 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:38.816 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:38.816 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:38.816 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:38.816 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:38.816 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:39.075 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:39.075 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:39.075 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:51.290 17:47:10 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:51.290 17:47:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:51.290 17:47:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:51.290 17:47:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:51.290 17:47:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:51.290 17:47:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:51.290 17:47:10 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.290 17:47:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:51.290 17:47:10 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.290 17:47:10 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:51.290 17:47:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:51.290 17:47:10 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.19 00:12:51.290 17:47:10 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.19 00:12:51.290 17:47:10 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:51.290 17:47:10 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.19 00:12:51.290 17:47:10 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.19 2 00:12:51.290 remove_attach_helper took 45.19s to complete (handling 2 nvme drive(s)) 17:47:10 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:51.290 17:47:10 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 80163 00:12:51.290 17:47:10 sw_hotplug -- common/autotest_common.sh@952 -- # '[' -z 80163 ']' 00:12:51.290 17:47:10 sw_hotplug -- common/autotest_common.sh@956 -- # kill -0 80163 00:12:51.290 17:47:10 sw_hotplug -- common/autotest_common.sh@957 -- # uname 00:12:51.290 17:47:10 sw_hotplug -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:51.290 17:47:10 sw_hotplug -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80163 00:12:51.290 killing process with pid 80163 00:12:51.290 17:47:10 sw_hotplug -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:51.290 17:47:10 sw_hotplug -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:51.290 17:47:10 sw_hotplug -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80163' 00:12:51.290 17:47:10 sw_hotplug -- common/autotest_common.sh@971 -- # kill 80163 00:12:51.290 17:47:10 sw_hotplug -- common/autotest_common.sh@976 -- # wait 80163 00:12:51.290 17:47:11 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:51.856 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:52.116 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:52.116 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:52.116 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:52.116 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:52.378 00:12:52.378 real 2m28.563s 00:12:52.378 user 1m48.913s 00:12:52.378 sys 0m18.454s 00:12:52.378 17:47:12 sw_hotplug -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:12:52.378 ************************************ 00:12:52.378 END TEST sw_hotplug 00:12:52.378 ************************************ 00:12:52.378 17:47:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:52.378 17:47:12 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:52.378 17:47:12 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:52.378 17:47:12 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:52.378 17:47:12 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:52.378 17:47:12 -- common/autotest_common.sh@10 -- # set +x 00:12:52.378 ************************************ 00:12:52.378 START TEST nvme_xnvme 00:12:52.378 ************************************ 00:12:52.378 17:47:12 nvme_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:52.378 * Looking for test storage... 00:12:52.378 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:52.378 17:47:12 nvme_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:52.378 17:47:12 nvme_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:12:52.378 17:47:12 nvme_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:52.378 17:47:12 nvme_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:52.378 17:47:12 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:52.378 17:47:12 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:52.378 17:47:12 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:52.378 17:47:12 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:52.378 17:47:12 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:52.378 17:47:12 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:52.378 17:47:12 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:52.378 17:47:12 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:52.379 17:47:12 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:52.379 17:47:12 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:52.379 17:47:12 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:52.379 17:47:12 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:52.379 17:47:12 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:52.379 17:47:12 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:52.379 17:47:12 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:52.379 17:47:12 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:52.379 17:47:12 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:52.379 17:47:12 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:52.379 17:47:12 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:52.379 17:47:12 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:52.379 17:47:12 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:52.379 17:47:12 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:52.379 17:47:12 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:52.379 17:47:12 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:52.379 17:47:12 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:52.379 17:47:12 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:52.379 17:47:12 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:52.379 17:47:12 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:52.379 17:47:12 nvme_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:52.379 17:47:12 nvme_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:52.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.379 --rc genhtml_branch_coverage=1 00:12:52.379 --rc genhtml_function_coverage=1 00:12:52.379 --rc genhtml_legend=1 00:12:52.379 --rc geninfo_all_blocks=1 00:12:52.379 --rc geninfo_unexecuted_blocks=1 00:12:52.379 00:12:52.379 ' 00:12:52.379 17:47:12 nvme_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:52.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.379 --rc genhtml_branch_coverage=1 00:12:52.379 --rc genhtml_function_coverage=1 00:12:52.379 --rc genhtml_legend=1 00:12:52.379 --rc geninfo_all_blocks=1 00:12:52.379 --rc geninfo_unexecuted_blocks=1 00:12:52.379 00:12:52.379 ' 00:12:52.379 17:47:12 nvme_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:52.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.379 --rc genhtml_branch_coverage=1 00:12:52.379 --rc genhtml_function_coverage=1 00:12:52.379 --rc genhtml_legend=1 00:12:52.379 --rc geninfo_all_blocks=1 00:12:52.379 --rc geninfo_unexecuted_blocks=1 00:12:52.379 00:12:52.379 ' 00:12:52.379 17:47:12 nvme_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:52.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.379 --rc genhtml_branch_coverage=1 00:12:52.379 --rc genhtml_function_coverage=1 00:12:52.379 --rc genhtml_legend=1 00:12:52.379 --rc geninfo_all_blocks=1 00:12:52.379 --rc geninfo_unexecuted_blocks=1 00:12:52.379 00:12:52.379 ' 00:12:52.379 17:47:12 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:52.379 17:47:12 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:52.379 17:47:12 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.379 17:47:12 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.379 17:47:12 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.379 17:47:12 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.379 17:47:12 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.379 17:47:12 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.379 17:47:12 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:52.379 17:47:12 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.379 17:47:12 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:52.379 17:47:12 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:52.379 17:47:12 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:52.379 17:47:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:52.379 ************************************ 00:12:52.379 START TEST xnvme_to_malloc_dd_copy 00:12:52.379 ************************************ 00:12:52.379 17:47:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1127 -- # malloc_to_xnvme_copy 00:12:52.379 17:47:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:52.379 17:47:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:52.379 17:47:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:52.640 17:47:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:52.640 17:47:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:52.640 17:47:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:52.640 17:47:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:52.640 17:47:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:52.640 17:47:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:52.640 17:47:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:52.640 17:47:12 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:52.640 17:47:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:52.640 17:47:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:52.640 17:47:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:52.640 17:47:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:52.640 17:47:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:52.640 17:47:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:52.640 17:47:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:52.640 17:47:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:52.640 17:47:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:52.640 17:47:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:52.640 17:47:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:52.640 17:47:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:52.640 17:47:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:52.640 { 00:12:52.640 "subsystems": [ 00:12:52.640 { 00:12:52.640 "subsystem": "bdev", 00:12:52.640 "config": [ 00:12:52.640 { 00:12:52.640 "params": { 00:12:52.640 "block_size": 512, 00:12:52.640 "num_blocks": 2097152, 00:12:52.641 "name": "malloc0" 00:12:52.641 }, 00:12:52.641 "method": "bdev_malloc_create" 00:12:52.641 }, 00:12:52.641 { 00:12:52.641 "params": { 00:12:52.641 "io_mechanism": "libaio", 00:12:52.641 "filename": "/dev/nullb0", 00:12:52.641 "name": "null0" 00:12:52.641 }, 00:12:52.641 "method": "bdev_xnvme_create" 00:12:52.641 }, 00:12:52.641 { 00:12:52.641 "method": "bdev_wait_for_examine" 00:12:52.641 } 00:12:52.641 ] 00:12:52.641 } 00:12:52.641 ] 00:12:52.641 } 00:12:52.641 [2024-11-05 17:47:12.467190] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:12:52.641 [2024-11-05 17:47:12.467485] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81530 ] 00:12:52.641 [2024-11-05 17:47:12.603986] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
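The JSON document above is produced by gen_conf and handed to spdk_dd over an anonymous file descriptor (xnvme/xnvme.sh@42 in the trace), which then copies the 1 GiB malloc0 bdev (2097152 blocks of 512 bytes) into the xnvme-backed null0 bdev. A standalone sketch of the same invocation, with SPDK_DIR standing in for the /home/vagrant/spdk_repo/spdk prefix seen in the log:

    # Forward copy: malloc0 -> null0, config supplied on a /dev/fd path as in the trace
    "$SPDK_DIR/build/bin/spdk_dd" --ib=malloc0 --ob=null0 --json <(gen_conf)

The @47 run a few entries below is the mirror image (--ib=null0 --ob=malloc0), which is why the same config document appears twice per I/O mechanism.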
00:12:52.900 [2024-11-05 17:47:12.635154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.900 [2024-11-05 17:47:12.675730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.284  [2024-11-05T17:47:15.217Z] Copying: 219/1024 [MB] (219 MBps) [2024-11-05T17:47:16.598Z] Copying: 439/1024 [MB] (220 MBps) [2024-11-05T17:47:17.540Z] Copying: 689/1024 [MB] (249 MBps) [2024-11-05T17:47:17.540Z] Copying: 992/1024 [MB] (302 MBps) [2024-11-05T17:47:17.802Z] Copying: 1024/1024 [MB] (average 249 MBps) 00:12:57.811 00:12:57.811 17:47:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:57.811 17:47:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:57.811 17:47:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:57.811 17:47:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:57.811 { 00:12:57.811 "subsystems": [ 00:12:57.811 { 00:12:57.811 "subsystem": "bdev", 00:12:57.811 "config": [ 00:12:57.811 { 00:12:57.811 "params": { 00:12:57.811 "block_size": 512, 00:12:57.811 "num_blocks": 2097152, 00:12:57.811 "name": "malloc0" 00:12:57.811 }, 00:12:57.811 "method": "bdev_malloc_create" 00:12:57.811 }, 00:12:57.811 { 00:12:57.811 "params": { 00:12:57.811 "io_mechanism": "libaio", 00:12:57.811 "filename": "/dev/nullb0", 00:12:57.811 "name": "null0" 00:12:57.811 }, 00:12:57.811 "method": "bdev_xnvme_create" 00:12:57.811 }, 00:12:57.811 { 00:12:57.811 "method": "bdev_wait_for_examine" 00:12:57.811 } 00:12:57.811 ] 00:12:57.811 } 00:12:57.811 ] 00:12:57.811 } 00:12:57.811 [2024-11-05 17:47:17.766359] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:12:57.811 [2024-11-05 17:47:17.766505] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81601 ] 00:12:58.072 [2024-11-05 17:47:17.899391] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
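Both copy directions name /dev/nullb0 as the xnvme filename; that device comes from the init_null_blk call traced earlier (dd/common.sh@186-187), which loads the kernel null_blk module with a 1 GiB disk. A hedged reconstruction of the helper pair; the @186 existence test on /sys/module/null_blk presumably guards a reload, since the trace does not show the branch taken when the module is already loaded:

    init_null_blk() {
        [[ -e /sys/module/null_blk ]] && modprobe -r null_blk   # assumed: unload a stale instance first
        modprobe null_blk "$@"                                  # e.g. gb=1 -> 1 GiB /dev/nullb0
    }

    remove_null_blk() {
        modprobe -r null_blk                                    # dd/common.sh@191, run as each suite ends
    }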
00:12:58.072 [2024-11-05 17:47:17.925407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.072 [2024-11-05 17:47:17.960800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.457  [2024-11-05T17:47:20.379Z] Copying: 304/1024 [MB] (304 MBps) [2024-11-05T17:47:21.310Z] Copying: 605/1024 [MB] (301 MBps) [2024-11-05T17:47:21.874Z] Copying: 912/1024 [MB] (306 MBps) [2024-11-05T17:47:22.131Z] Copying: 1024/1024 [MB] (average 304 MBps) 00:13:02.140 00:13:02.140 17:47:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:02.140 17:47:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:02.140 17:47:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:02.140 17:47:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:02.140 17:47:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:02.140 17:47:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:02.140 { 00:13:02.140 "subsystems": [ 00:13:02.140 { 00:13:02.140 "subsystem": "bdev", 00:13:02.140 "config": [ 00:13:02.140 { 00:13:02.140 "params": { 00:13:02.140 "block_size": 512, 00:13:02.140 "num_blocks": 2097152, 00:13:02.140 "name": "malloc0" 00:13:02.140 }, 00:13:02.140 "method": "bdev_malloc_create" 00:13:02.140 }, 00:13:02.140 { 00:13:02.140 "params": { 00:13:02.140 "io_mechanism": "io_uring", 00:13:02.140 "filename": "/dev/nullb0", 00:13:02.140 "name": "null0" 00:13:02.140 }, 00:13:02.140 "method": "bdev_xnvme_create" 00:13:02.140 }, 00:13:02.140 { 00:13:02.140 "method": "bdev_wait_for_examine" 00:13:02.140 } 00:13:02.140 ] 00:13:02.140 } 00:13:02.140 ] 00:13:02.140 } 00:13:02.140 [2024-11-05 17:47:22.119996] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:13:02.140 [2024-11-05 17:47:22.120132] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81656 ] 00:13:02.396 [2024-11-05 17:47:22.250695] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
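The switch from libaio to io_uring visible at the top of this stretch (xnvme/xnvme.sh@38-39) is just one key being rewritten in the bdev_xnvme_create parameter map before the config is regenerated, so the whole malloc-to-null copy pair runs once per I/O mechanism:

    xnvme_io=(libaio io_uring)
    for io in "${xnvme_io[@]}"; do
        method_bdev_xnvme_create_0["io_mechanism"]=$io
        # regenerate the JSON via gen_conf, then rerun the @42 and @47 spdk_dd copies
    done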
00:13:02.396 [2024-11-05 17:47:22.277105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.396 [2024-11-05 17:47:22.299526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.805  [2024-11-05T17:47:24.733Z] Copying: 312/1024 [MB] (312 MBps) [2024-11-05T17:47:25.673Z] Copying: 622/1024 [MB] (310 MBps) [2024-11-05T17:47:25.930Z] Copying: 928/1024 [MB] (305 MBps) [2024-11-05T17:47:26.498Z] Copying: 1024/1024 [MB] (average 309 MBps) 00:13:06.507 00:13:06.507 17:47:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:06.507 17:47:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:06.507 17:47:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:06.507 17:47:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:06.507 { 00:13:06.507 "subsystems": [ 00:13:06.507 { 00:13:06.507 "subsystem": "bdev", 00:13:06.507 "config": [ 00:13:06.507 { 00:13:06.507 "params": { 00:13:06.507 "block_size": 512, 00:13:06.507 "num_blocks": 2097152, 00:13:06.507 "name": "malloc0" 00:13:06.507 }, 00:13:06.507 "method": "bdev_malloc_create" 00:13:06.507 }, 00:13:06.507 { 00:13:06.507 "params": { 00:13:06.507 "io_mechanism": "io_uring", 00:13:06.507 "filename": "/dev/nullb0", 00:13:06.507 "name": "null0" 00:13:06.507 }, 00:13:06.507 "method": "bdev_xnvme_create" 00:13:06.507 }, 00:13:06.507 { 00:13:06.507 "method": "bdev_wait_for_examine" 00:13:06.507 } 00:13:06.507 ] 00:13:06.507 } 00:13:06.507 ] 00:13:06.507 } 00:13:06.507 [2024-11-05 17:47:26.370955] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:13:06.507 [2024-11-05 17:47:26.371103] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81710 ] 00:13:06.768 [2024-11-05 17:47:26.502891] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:06.768 [2024-11-05 17:47:26.529148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.768 [2024-11-05 17:47:26.552761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.153  [2024-11-05T17:47:29.085Z] Copying: 317/1024 [MB] (317 MBps) [2024-11-05T17:47:30.026Z] Copying: 635/1024 [MB] (318 MBps) [2024-11-05T17:47:30.284Z] Copying: 953/1024 [MB] (317 MBps) [2024-11-05T17:47:30.542Z] Copying: 1024/1024 [MB] (average 317 MBps) 00:13:10.551 00:13:10.551 17:47:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:13:10.551 17:47:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:13:10.551 00:13:10.551 real 0m18.149s 00:13:10.551 user 0m14.733s 00:13:10.551 sys 0m2.917s 00:13:10.551 17:47:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:10.551 17:47:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:10.551 ************************************ 00:13:10.551 END TEST xnvme_to_malloc_dd_copy 00:13:10.551 ************************************ 00:13:10.809 17:47:30 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:10.809 17:47:30 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:10.809 17:47:30 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:10.809 17:47:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:10.809 ************************************ 00:13:10.809 START TEST xnvme_bdevperf 00:13:10.809 ************************************ 00:13:10.809 17:47:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1127 -- # xnvme_bdevperf 00:13:10.809 17:47:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:13:10.809 17:47:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:13:10.809 17:47:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:13:10.809 17:47:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:13:10.809 17:47:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:13:10.809 17:47:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:10.809 17:47:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:13:10.810 17:47:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:13:10.810 17:47:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:13:10.810 17:47:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:13:10.810 17:47:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:13:10.810 17:47:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:13:10.810 17:47:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:10.810 17:47:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:10.810 17:47:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:10.810 17:47:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:10.810 17:47:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:10.810 
17:47:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:10.810 17:47:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:10.810 17:47:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:10.810 { 00:13:10.810 "subsystems": [ 00:13:10.810 { 00:13:10.810 "subsystem": "bdev", 00:13:10.810 "config": [ 00:13:10.810 { 00:13:10.810 "params": { 00:13:10.810 "io_mechanism": "libaio", 00:13:10.810 "filename": "/dev/nullb0", 00:13:10.810 "name": "null0" 00:13:10.810 }, 00:13:10.810 "method": "bdev_xnvme_create" 00:13:10.810 }, 00:13:10.810 { 00:13:10.810 "method": "bdev_wait_for_examine" 00:13:10.810 } 00:13:10.810 ] 00:13:10.810 } 00:13:10.810 ] 00:13:10.810 } 00:13:10.810 [2024-11-05 17:47:30.635991] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:13:10.810 [2024-11-05 17:47:30.636103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81789 ] 00:13:10.810 [2024-11-05 17:47:30.764503] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:10.810 [2024-11-05 17:47:30.794370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.068 [2024-11-05 17:47:30.817672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.068 Running I/O for 5 seconds... 00:13:12.935 152640.00 IOPS, 596.25 MiB/s [2024-11-05T17:47:34.324Z] 164736.00 IOPS, 643.50 MiB/s [2024-11-05T17:47:35.259Z] 178645.33 IOPS, 697.83 MiB/s [2024-11-05T17:47:36.191Z] 185536.00 IOPS, 724.75 MiB/s 00:13:16.200 Latency(us) 00:13:16.200 [2024-11-05T17:47:36.191Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:16.200 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:16.200 null0 : 5.00 189750.79 741.21 0.00 0.00 334.94 103.19 2054.30 00:13:16.200 [2024-11-05T17:47:36.191Z] =================================================================================================================== 00:13:16.200 [2024-11-05T17:47:36.191Z] Total : 189750.79 741.21 0.00 0.00 334.94 103.19 2054.30 00:13:16.200 17:47:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:16.200 17:47:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:16.200 17:47:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:16.201 17:47:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:16.201 17:47:36 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:16.201 17:47:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:16.201 { 00:13:16.201 "subsystems": [ 00:13:16.201 { 00:13:16.201 "subsystem": "bdev", 00:13:16.201 "config": [ 00:13:16.201 { 00:13:16.201 "params": { 00:13:16.201 "io_mechanism": "io_uring", 00:13:16.201 "filename": "/dev/nullb0", 00:13:16.201 "name": "null0" 00:13:16.201 }, 00:13:16.201 "method": "bdev_xnvme_create" 00:13:16.201 }, 00:13:16.201 { 00:13:16.201 "method": "bdev_wait_for_examine" 00:13:16.201 } 00:13:16.201 ] 00:13:16.201 } 00:13:16.201 ] 00:13:16.201 } 00:13:16.201 [2024-11-05 
17:47:36.147816] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:13:16.201 [2024-11-05 17:47:36.147926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81857 ] 00:13:16.458 [2024-11-05 17:47:36.276692] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:16.458 [2024-11-05 17:47:36.301495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.458 [2024-11-05 17:47:36.322363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.458 Running I/O for 5 seconds... 00:13:18.765 237248.00 IOPS, 926.75 MiB/s [2024-11-05T17:47:39.690Z] 236192.00 IOPS, 922.62 MiB/s [2024-11-05T17:47:40.622Z] 234218.67 IOPS, 914.92 MiB/s [2024-11-05T17:47:41.555Z] 233184.00 IOPS, 910.88 MiB/s 00:13:21.564 Latency(us) 00:13:21.564 [2024-11-05T17:47:41.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:21.564 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:21.564 null0 : 5.00 232565.08 908.46 0.00 0.00 272.88 151.24 1537.58 00:13:21.564 [2024-11-05T17:47:41.555Z] =================================================================================================================== 00:13:21.564 [2024-11-05T17:47:41.555Z] Total : 232565.08 908.46 0.00 0.00 272.88 151.24 1537.58 00:13:21.564 17:47:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:13:21.564 17:47:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:13:21.821 00:13:21.821 real 0m11.040s 00:13:21.821 user 0m8.707s 00:13:21.821 sys 0m2.094s 00:13:21.821 17:47:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:21.821 17:47:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:21.821 ************************************ 00:13:21.821 END TEST xnvme_bdevperf 00:13:21.821 ************************************ 00:13:21.821 00:13:21.821 real 0m29.465s 00:13:21.821 user 0m23.561s 00:13:21.821 sys 0m5.141s 00:13:21.821 17:47:41 nvme_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:21.821 17:47:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:21.821 ************************************ 00:13:21.821 END TEST nvme_xnvme 00:13:21.821 ************************************ 00:13:21.821 17:47:41 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:21.821 17:47:41 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:21.821 17:47:41 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:21.821 17:47:41 -- common/autotest_common.sh@10 -- # set +x 00:13:21.821 ************************************ 00:13:21.821 START TEST blockdev_xnvme 00:13:21.821 ************************************ 00:13:21.821 17:47:41 blockdev_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:21.821 * Looking for test storage... 
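Every suite in this log, including the blockdev_xnvme run launched just above, goes through the run_test wrapper; its shape can be inferred from the traced autotest_common.sh lines ('[' 3 -le 1 ']' at @1103, xtrace_disable at @1109, the suite exec at @1127) and the banner rows around each START/END TEST. A hedged sketch, not the verbatim SPDK helper:

    run_test() {
        local test_name=$1; shift
        # @1103's '[' N -le 1 ']' appears to special-case single-argument calls
        echo '************************************'
        echo "START TEST $test_name"
        "$@"                        # @1127: run the suite, e.g. blockdev.sh xnvme
        local rc=$?
        echo "END TEST $test_name"
        echo '************************************'
        return $rc
    }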
00:13:21.821 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:21.821 17:47:41 blockdev_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:21.821 17:47:41 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:13:21.821 17:47:41 blockdev_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:22.079 17:47:41 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:22.079 17:47:41 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:22.079 17:47:41 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:22.079 17:47:41 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:22.079 17:47:41 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:22.079 17:47:41 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:22.079 17:47:41 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:22.079 17:47:41 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:22.079 17:47:41 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:22.079 17:47:41 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:22.079 17:47:41 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:22.079 17:47:41 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:22.079 17:47:41 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:22.079 17:47:41 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:13:22.079 17:47:41 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:22.079 17:47:41 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:22.079 17:47:41 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:22.079 17:47:41 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:22.079 17:47:41 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:22.079 17:47:41 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:22.079 17:47:41 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:22.079 17:47:41 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:22.079 17:47:41 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:22.079 17:47:41 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:22.079 17:47:41 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:22.079 17:47:41 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:22.079 17:47:41 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:22.079 17:47:41 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:22.079 17:47:41 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:13:22.079 17:47:41 blockdev_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:22.079 17:47:41 blockdev_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:22.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.079 --rc genhtml_branch_coverage=1 00:13:22.079 --rc genhtml_function_coverage=1 00:13:22.079 --rc genhtml_legend=1 00:13:22.079 --rc geninfo_all_blocks=1 00:13:22.079 --rc geninfo_unexecuted_blocks=1 00:13:22.079 00:13:22.079 ' 00:13:22.079 17:47:41 blockdev_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:22.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.079 --rc genhtml_branch_coverage=1 00:13:22.079 --rc genhtml_function_coverage=1 00:13:22.079 --rc genhtml_legend=1 
00:13:22.079 --rc geninfo_all_blocks=1 00:13:22.079 --rc geninfo_unexecuted_blocks=1 00:13:22.079 00:13:22.079 ' 00:13:22.079 17:47:41 blockdev_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:22.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.079 --rc genhtml_branch_coverage=1 00:13:22.079 --rc genhtml_function_coverage=1 00:13:22.079 --rc genhtml_legend=1 00:13:22.079 --rc geninfo_all_blocks=1 00:13:22.079 --rc geninfo_unexecuted_blocks=1 00:13:22.079 00:13:22.079 ' 00:13:22.079 17:47:41 blockdev_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:22.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.079 --rc genhtml_branch_coverage=1 00:13:22.079 --rc genhtml_function_coverage=1 00:13:22.079 --rc genhtml_legend=1 00:13:22.079 --rc geninfo_all_blocks=1 00:13:22.079 --rc geninfo_unexecuted_blocks=1 00:13:22.079 00:13:22.079 ' 00:13:22.079 17:47:41 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:22.079 17:47:41 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:13:22.079 17:47:41 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:22.079 17:47:41 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:22.079 17:47:41 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:22.079 17:47:41 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:22.079 17:47:41 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:22.079 17:47:41 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:22.079 17:47:41 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:13:22.079 17:47:41 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:13:22.079 17:47:41 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:13:22.079 17:47:41 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:13:22.079 17:47:41 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:13:22.079 17:47:41 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:13:22.080 17:47:41 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:13:22.080 17:47:41 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:13:22.080 17:47:41 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:13:22.080 17:47:41 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:13:22.080 17:47:41 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:13:22.080 17:47:41 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:13:22.080 17:47:41 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:13:22.080 17:47:41 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:13:22.080 17:47:41 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:13:22.080 17:47:41 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:13:22.080 17:47:41 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=81994 00:13:22.080 17:47:41 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:22.080 17:47:41 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 81994 00:13:22.080 17:47:41 blockdev_xnvme -- common/autotest_common.sh@833 -- # '[' -z 81994 ']' 00:13:22.080 17:47:41 blockdev_xnvme -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:13:22.080 17:47:41 blockdev_xnvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:22.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.080 17:47:41 blockdev_xnvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.080 17:47:41 blockdev_xnvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:22.080 17:47:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:22.080 17:47:41 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:22.080 [2024-11-05 17:47:41.929393] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:13:22.080 [2024-11-05 17:47:41.929511] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81994 ] 00:13:22.080 [2024-11-05 17:47:42.058209] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:22.338 [2024-11-05 17:47:42.086939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.338 [2024-11-05 17:47:42.110970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.903 17:47:42 blockdev_xnvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:22.903 17:47:42 blockdev_xnvme -- common/autotest_common.sh@866 -- # return 0 00:13:22.903 17:47:42 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:13:22.903 17:47:42 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:13:22.903 17:47:42 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:13:22.903 17:47:42 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:13:22.903 17:47:42 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:23.161 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:23.418 Waiting for block devices as requested 00:13:23.418 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:23.418 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:23.418 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:23.675 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:29.004 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:29.004 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
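The iteration traced above is the zoned-device scan from get_zoned_devs: a namespace counts as zoned only when /sys/block/<dev>/queue/zoned exists and reads something other than "none" (here every check lands on "[[ none != none ]]", so nothing is skipped). Reduced to a standalone sketch, together with the bdev_xnvme_create line-building that follows for each non-zoned block node (the io_uring mechanism and device paths mirror this run; the script structure is an illustrative condensation, not the suite's exact helper):

    #!/usr/bin/env bash
    # Sketch of the zoned scan plus xnvme bdev RPC generation seen in this trace.
    io_mechanism=io_uring          # as selected by setup_xnvme_conf in this run
    nvmes=()
    for nvme in /dev/nvme*n*; do
        name=${nvme##*/}                          # e.g. nvme0n1
        zoned_attr=/sys/block/$name/queue/zoned
        # Skip zoned namespaces: attribute present and not reading "none".
        if [[ -e $zoned_attr && $(<"$zoned_attr") != none ]]; then
            continue
        fi
        [[ -b $nvme ]] || continue                # only real block nodes
        nvmes+=("bdev_xnvme_create $nvme $name $io_mechanism")
    done
    # The suite feeds these lines to the target via rpc_cmd, one RPC per bdev:
    printf '%s\n' "${nvmes[@]}"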
00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:29.004 17:47:48 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:29.004 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:29.004 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:13:29.004 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:29.004 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:29.004 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 
00:13:29.004 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:13:29.004 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:29.004 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:29.004 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:29.004 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:13:29.004 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:29.004 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:29.004 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:29.004 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:13:29.004 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:29.004 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:29.004 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:29.004 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:13:29.004 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:29.004 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:29.005 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:29.005 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:13:29.005 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:29.005 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:29.005 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:13:29.005 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:13:29.005 17:47:48 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.005 17:47:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:29.005 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:13:29.005 nvme0n1 00:13:29.005 nvme1n1 00:13:29.005 nvme2n1 00:13:29.005 nvme2n2 00:13:29.005 nvme2n3 00:13:29.005 nvme3n1 00:13:29.005 17:47:48 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.005 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:13:29.005 17:47:48 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.005 17:47:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:29.005 17:47:48 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.005 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:13:29.005 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:13:29.005 17:47:48 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.005 17:47:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:29.005 17:47:48 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.005 17:47:48 blockdev_xnvme -- 
bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:13:29.005 17:47:48 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.005 17:47:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:29.005 17:47:48 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.005 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:29.005 17:47:48 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.005 17:47:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:29.005 17:47:48 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.005 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:13:29.005 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:13:29.005 17:47:48 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.005 17:47:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:29.005 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:13:29.005 17:47:48 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.005 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:13:29.005 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:13:29.005 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "68ba83e2-4fb8-4e5a-8bce-f1266d14c989"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "68ba83e2-4fb8-4e5a-8bce-f1266d14c989",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "86b4fa51-ac10-4a0a-81df-a47be24264a5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "86b4fa51-ac10-4a0a-81df-a47be24264a5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "59a56005-5fdc-4fa6-a115-3c3bab0ad2b1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "59a56005-5fdc-4fa6-a115-3c3bab0ad2b1",' ' "assigned_rate_limits": {' ' 
"rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "5eb11ba0-090b-464f-8071-2b6a7396dd20"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5eb11ba0-090b-464f-8071-2b6a7396dd20",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "0c04ff5f-a9a7-4c9a-a262-d897fd5c7341"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0c04ff5f-a9a7-4c9a-a262-d897fd5c7341",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "66aa43ba-4684-4bca-af89-7534df016fec"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "66aa43ba-4684-4bca-af89-7534df016fec",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:29.005 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:13:29.005 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@751 -- # 
hello_world_bdev=nvme0n1 00:13:29.005 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:13:29.005 17:47:48 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 81994 00:13:29.005 17:47:48 blockdev_xnvme -- common/autotest_common.sh@952 -- # '[' -z 81994 ']' 00:13:29.005 17:47:48 blockdev_xnvme -- common/autotest_common.sh@956 -- # kill -0 81994 00:13:29.005 17:47:48 blockdev_xnvme -- common/autotest_common.sh@957 -- # uname 00:13:29.005 17:47:48 blockdev_xnvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:29.005 17:47:48 blockdev_xnvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81994 00:13:29.005 17:47:48 blockdev_xnvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:29.005 17:47:48 blockdev_xnvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:29.005 killing process with pid 81994 00:13:29.005 17:47:48 blockdev_xnvme -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81994' 00:13:29.005 17:47:48 blockdev_xnvme -- common/autotest_common.sh@971 -- # kill 81994 00:13:29.005 17:47:48 blockdev_xnvme -- common/autotest_common.sh@976 -- # wait 81994 00:13:29.265 17:47:49 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:29.265 17:47:49 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:29.265 17:47:49 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:13:29.265 17:47:49 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:29.265 17:47:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:29.265 ************************************ 00:13:29.265 START TEST bdev_hello_world 00:13:29.265 ************************************ 00:13:29.266 17:47:49 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:29.266 [2024-11-05 17:47:49.133580] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:13:29.266 [2024-11-05 17:47:49.133701] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82336 ] 00:13:29.524 [2024-11-05 17:47:49.263810] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
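For reference, the bdev_hello_world invocation logged here reduces to pointing the bundled hello_bdev example at the generated JSON bdev config and naming one xnvme bdev. A minimal reproduction with the same paths as this run (SPDK_REPO is just a convenience variable, not part of the suite):

    # Run hello_bdev against the first xnvme bdev from this configuration.
    SPDK_REPO=/home/vagrant/spdk_repo/spdk
    sudo "$SPDK_REPO/build/examples/hello_bdev" \
        --json "$SPDK_REPO/test/bdev/bdev.json" \
        -b nvme0n1
    # Expected notices, as in the trace below: open the bdev, write
    # "Hello World!", read it back, then stop the app.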
00:13:29.524 [2024-11-05 17:47:49.294144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.525 [2024-11-05 17:47:49.331895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.788 [2024-11-05 17:47:49.589854] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:29.788 [2024-11-05 17:47:49.589932] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:13:29.788 [2024-11-05 17:47:49.589959] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:29.788 [2024-11-05 17:47:49.592500] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:29.788 [2024-11-05 17:47:49.593080] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:29.788 [2024-11-05 17:47:49.593115] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:29.788 [2024-11-05 17:47:49.593690] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:13:29.788 00:13:29.788 [2024-11-05 17:47:49.593722] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:30.049 00:13:30.049 real 0m0.771s 00:13:30.049 user 0m0.406s 00:13:30.049 sys 0m0.221s 00:13:30.049 17:47:49 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:30.049 17:47:49 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:30.049 ************************************ 00:13:30.049 END TEST bdev_hello_world 00:13:30.049 ************************************ 00:13:30.049 17:47:49 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:13:30.049 17:47:49 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:30.049 17:47:49 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:30.049 17:47:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:30.049 ************************************ 00:13:30.049 START TEST bdev_bounds 00:13:30.049 ************************************ 00:13:30.049 17:47:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:13:30.049 17:47:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=82367 00:13:30.049 17:47:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:30.049 Process bdevio pid: 82367 00:13:30.049 17:47:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 82367' 00:13:30.049 17:47:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 82367 00:13:30.049 17:47:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:30.049 17:47:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 82367 ']' 00:13:30.049 17:47:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.049 17:47:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:30.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.049 17:47:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
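The bdev_bounds test starting here drives the bdevio app and then triggers its CUnit suites over RPC. Stripped of the harness, the two-step pattern visible in this trace is roughly the sketch below; the suite itself wraps the start and teardown in its waitforlisten and killprocess helpers, which are simplified here to a bare background job and kill:

    SPDK_REPO=/home/vagrant/spdk_repo/spdk
    # 1) Start bdevio waiting for RPC (-w), with no reserved memory (-s 0),
    #    on the same JSON bdev config used by the rest of the suite.
    sudo "$SPDK_REPO/test/bdev/bdevio/bdevio" -w -s 0 \
        --json "$SPDK_REPO/test/bdev/bdev.json" &
    bdevio_pid=$!
    # 2) Once it listens on /var/tmp/spdk.sock, run every registered suite.
    "$SPDK_REPO/test/bdev/bdevio/tests.py" perform_tests
    sudo kill "$bdevio_pid"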
00:13:30.049 17:47:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:30.049 17:47:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:30.049 [2024-11-05 17:47:49.993813] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:13:30.049 [2024-11-05 17:47:49.993994] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82367 ] 00:13:30.307 [2024-11-05 17:47:50.132332] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:30.307 [2024-11-05 17:47:50.162111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:30.307 [2024-11-05 17:47:50.189901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.307 [2024-11-05 17:47:50.190204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:30.307 [2024-11-05 17:47:50.190322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.873 17:47:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:30.873 17:47:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:13:30.873 17:47:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:31.131 I/O targets: 00:13:31.131 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:31.131 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:31.131 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:31.131 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:31.131 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:31.131 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:31.131 00:13:31.131 00:13:31.131 CUnit - A unit testing framework for C - Version 2.1-3 00:13:31.131 http://cunit.sourceforge.net/ 00:13:31.131 00:13:31.131 00:13:31.131 Suite: bdevio tests on: nvme3n1 00:13:31.131 Test: blockdev write read block ...passed 00:13:31.131 Test: blockdev write zeroes read block ...passed 00:13:31.131 Test: blockdev write zeroes read no split ...passed 00:13:31.131 Test: blockdev write zeroes read split ...passed 00:13:31.131 Test: blockdev write zeroes read split partial ...passed 00:13:31.131 Test: blockdev reset ...passed 00:13:31.131 Test: blockdev write read 8 blocks ...passed 00:13:31.131 Test: blockdev write read size > 128k ...passed 00:13:31.131 Test: blockdev write read invalid size ...passed 00:13:31.131 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:31.131 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:31.131 Test: blockdev write read max offset ...passed 00:13:31.131 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:31.131 Test: blockdev writev readv 8 blocks ...passed 00:13:31.131 Test: blockdev writev readv 30 x 1block ...passed 00:13:31.131 Test: blockdev writev readv block ...passed 00:13:31.131 Test: blockdev writev readv size > 128k ...passed 00:13:31.131 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:31.131 Test: blockdev comparev and writev ...passed 00:13:31.131 Test: blockdev nvme passthru rw ...passed 00:13:31.131 Test: blockdev nvme passthru vendor specific ...passed 
00:13:31.131 Test: blockdev nvme admin passthru ...passed 00:13:31.131 Test: blockdev copy ...passed 00:13:31.131 Suite: bdevio tests on: nvme2n3 00:13:31.131 Test: blockdev write read block ...passed 00:13:31.131 Test: blockdev write zeroes read block ...passed 00:13:31.131 Test: blockdev write zeroes read no split ...passed 00:13:31.131 Test: blockdev write zeroes read split ...passed 00:13:31.131 Test: blockdev write zeroes read split partial ...passed 00:13:31.131 Test: blockdev reset ...passed 00:13:31.131 Test: blockdev write read 8 blocks ...passed 00:13:31.131 Test: blockdev write read size > 128k ...passed 00:13:31.131 Test: blockdev write read invalid size ...passed 00:13:31.131 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:31.131 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:31.131 Test: blockdev write read max offset ...passed 00:13:31.131 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:31.131 Test: blockdev writev readv 8 blocks ...passed 00:13:31.131 Test: blockdev writev readv 30 x 1block ...passed 00:13:31.131 Test: blockdev writev readv block ...passed 00:13:31.131 Test: blockdev writev readv size > 128k ...passed 00:13:31.131 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:31.131 Test: blockdev comparev and writev ...passed 00:13:31.131 Test: blockdev nvme passthru rw ...passed 00:13:31.131 Test: blockdev nvme passthru vendor specific ...passed 00:13:31.131 Test: blockdev nvme admin passthru ...passed 00:13:31.131 Test: blockdev copy ...passed 00:13:31.131 Suite: bdevio tests on: nvme2n2 00:13:31.131 Test: blockdev write read block ...passed 00:13:31.131 Test: blockdev write zeroes read block ...passed 00:13:31.131 Test: blockdev write zeroes read no split ...passed 00:13:31.131 Test: blockdev write zeroes read split ...passed 00:13:31.131 Test: blockdev write zeroes read split partial ...passed 00:13:31.131 Test: blockdev reset ...passed 00:13:31.131 Test: blockdev write read 8 blocks ...passed 00:13:31.131 Test: blockdev write read size > 128k ...passed 00:13:31.131 Test: blockdev write read invalid size ...passed 00:13:31.131 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:31.131 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:31.131 Test: blockdev write read max offset ...passed 00:13:31.131 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:31.131 Test: blockdev writev readv 8 blocks ...passed 00:13:31.131 Test: blockdev writev readv 30 x 1block ...passed 00:13:31.131 Test: blockdev writev readv block ...passed 00:13:31.131 Test: blockdev writev readv size > 128k ...passed 00:13:31.131 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:31.131 Test: blockdev comparev and writev ...passed 00:13:31.131 Test: blockdev nvme passthru rw ...passed 00:13:31.131 Test: blockdev nvme passthru vendor specific ...passed 00:13:31.131 Test: blockdev nvme admin passthru ...passed 00:13:31.131 Test: blockdev copy ...passed 00:13:31.131 Suite: bdevio tests on: nvme2n1 00:13:31.131 Test: blockdev write read block ...passed 00:13:31.131 Test: blockdev write zeroes read block ...passed 00:13:31.131 Test: blockdev write zeroes read no split ...passed 00:13:31.131 Test: blockdev write zeroes read split ...passed 00:13:31.131 Test: blockdev write zeroes read split partial ...passed 00:13:31.131 Test: blockdev reset ...passed 00:13:31.131 Test: blockdev write read 8 blocks 
...passed 00:13:31.131 Test: blockdev write read size > 128k ...passed 00:13:31.131 Test: blockdev write read invalid size ...passed 00:13:31.131 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:31.131 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:31.131 Test: blockdev write read max offset ...passed 00:13:31.131 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:31.131 Test: blockdev writev readv 8 blocks ...passed 00:13:31.131 Test: blockdev writev readv 30 x 1block ...passed 00:13:31.131 Test: blockdev writev readv block ...passed 00:13:31.131 Test: blockdev writev readv size > 128k ...passed 00:13:31.131 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:31.131 Test: blockdev comparev and writev ...passed 00:13:31.132 Test: blockdev nvme passthru rw ...passed 00:13:31.132 Test: blockdev nvme passthru vendor specific ...passed 00:13:31.132 Test: blockdev nvme admin passthru ...passed 00:13:31.132 Test: blockdev copy ...passed 00:13:31.132 Suite: bdevio tests on: nvme1n1 00:13:31.132 Test: blockdev write read block ...passed 00:13:31.132 Test: blockdev write zeroes read block ...passed 00:13:31.132 Test: blockdev write zeroes read no split ...passed 00:13:31.132 Test: blockdev write zeroes read split ...passed 00:13:31.132 Test: blockdev write zeroes read split partial ...passed 00:13:31.132 Test: blockdev reset ...passed 00:13:31.132 Test: blockdev write read 8 blocks ...passed 00:13:31.132 Test: blockdev write read size > 128k ...passed 00:13:31.132 Test: blockdev write read invalid size ...passed 00:13:31.132 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:31.132 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:31.132 Test: blockdev write read max offset ...passed 00:13:31.132 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:31.132 Test: blockdev writev readv 8 blocks ...passed 00:13:31.132 Test: blockdev writev readv 30 x 1block ...passed 00:13:31.132 Test: blockdev writev readv block ...passed 00:13:31.132 Test: blockdev writev readv size > 128k ...passed 00:13:31.132 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:31.132 Test: blockdev comparev and writev ...passed 00:13:31.132 Test: blockdev nvme passthru rw ...passed 00:13:31.132 Test: blockdev nvme passthru vendor specific ...passed 00:13:31.132 Test: blockdev nvme admin passthru ...passed 00:13:31.132 Test: blockdev copy ...passed 00:13:31.132 Suite: bdevio tests on: nvme0n1 00:13:31.132 Test: blockdev write read block ...passed 00:13:31.132 Test: blockdev write zeroes read block ...passed 00:13:31.132 Test: blockdev write zeroes read no split ...passed 00:13:31.132 Test: blockdev write zeroes read split ...passed 00:13:31.132 Test: blockdev write zeroes read split partial ...passed 00:13:31.132 Test: blockdev reset ...passed 00:13:31.132 Test: blockdev write read 8 blocks ...passed 00:13:31.132 Test: blockdev write read size > 128k ...passed 00:13:31.132 Test: blockdev write read invalid size ...passed 00:13:31.132 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:31.132 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:31.132 Test: blockdev write read max offset ...passed 00:13:31.132 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:31.132 Test: blockdev writev readv 8 blocks ...passed 00:13:31.132 Test: blockdev writev readv 30 x 
1block ...passed 00:13:31.132 Test: blockdev writev readv block ...passed 00:13:31.132 Test: blockdev writev readv size > 128k ...passed 00:13:31.132 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:31.132 Test: blockdev comparev and writev ...passed 00:13:31.132 Test: blockdev nvme passthru rw ...passed 00:13:31.132 Test: blockdev nvme passthru vendor specific ...passed 00:13:31.132 Test: blockdev nvme admin passthru ...passed 00:13:31.132 Test: blockdev copy ...passed 00:13:31.132 00:13:31.132 Run Summary: Type Total Ran Passed Failed Inactive 00:13:31.132 suites 6 6 n/a 0 0 00:13:31.132 tests 138 138 138 0 0 00:13:31.132 asserts 780 780 780 0 n/a 00:13:31.132 00:13:31.132 Elapsed time = 0.419 seconds 00:13:31.132 0 00:13:31.389 17:47:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 82367 00:13:31.389 17:47:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 82367 ']' 00:13:31.389 17:47:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 82367 00:13:31.389 17:47:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:13:31.389 17:47:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:31.390 17:47:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82367 00:13:31.390 17:47:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:31.390 17:47:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:31.390 17:47:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82367' 00:13:31.390 killing process with pid 82367 00:13:31.390 17:47:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 82367 00:13:31.390 17:47:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 82367 00:13:31.390 17:47:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:31.390 00:13:31.390 real 0m1.420s 00:13:31.390 user 0m3.461s 00:13:31.390 sys 0m0.340s 00:13:31.390 17:47:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:31.390 17:47:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:31.390 ************************************ 00:13:31.390 END TEST bdev_bounds 00:13:31.390 ************************************ 00:13:31.648 17:47:51 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:31.648 17:47:51 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:31.648 17:47:51 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:31.648 17:47:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.648 ************************************ 00:13:31.648 START TEST bdev_nbd 00:13:31.648 ************************************ 00:13:31.648 17:47:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:31.648 17:47:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:31.648 17:47:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:13:31.648 17:47:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:13:31.648 17:47:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:31.648 17:47:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:31.648 17:47:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:31.648 17:47:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:13:31.648 17:47:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:31.648 17:47:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:31.648 17:47:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:31.648 17:47:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:13:31.648 17:47:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:31.648 17:47:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:31.648 17:47:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:31.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:31.648 17:47:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:31.648 17:47:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=82411 00:13:31.648 17:47:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:31.648 17:47:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 82411 /var/tmp/spdk-nbd.sock 00:13:31.648 17:47:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 82411 ']' 00:13:31.648 17:47:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:31.648 17:47:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:31.648 17:47:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:31.648 17:47:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:31.648 17:47:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:31.648 17:47:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:31.648 [2024-11-05 17:47:51.470685] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:13:31.648 [2024-11-05 17:47:51.471267] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.648 [2024-11-05 17:47:51.602722] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
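The bdev_nbd phase served by this bdev_svc instance works by exporting each xnvme bdev as a kernel /dev/nbdN node over the dedicated /var/tmp/spdk-nbd.sock RPC socket. The per-bdev step traced in the following lines comes down to one start RPC (which prints the device node it picked when none is requested) and a matching stop; a condensed sketch under the same paths as this run:

    SPDK_REPO=/home/vagrant/spdk_repo/spdk
    rpc="$SPDK_REPO/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    # Export one bdev as an NBD device; the RPC returns e.g. /dev/nbd0.
    nbd_device=$($rpc nbd_start_disk nvme0n1)
    echo "nvme0n1 is now reachable as $nbd_device"
    # ...and tear it down again when the test is done:
    $rpc nbd_stop_disk "$nbd_device"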
00:13:31.648 [2024-11-05 17:47:51.631290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.906 [2024-11-05 17:47:51.655736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.471 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:32.471 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:13:32.471 17:47:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:32.471 17:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:32.471 17:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:32.471 17:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:32.471 17:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:32.471 17:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:32.471 17:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:32.471 17:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:32.471 17:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:32.471 17:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:32.471 17:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:32.471 17:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:32.471 17:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:32.728 17:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:32.728 17:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:32.728 17:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:32.728 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:32.728 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:32.728 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:32.728 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:32.728 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:32.728 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:32.728 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:32.728 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:32.728 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:32.728 1+0 records in 00:13:32.728 1+0 records out 00:13:32.728 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343636 s, 11.9 MB/s 00:13:32.728 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.728 17:47:52 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:32.728 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.728 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:32.728 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:32.729 17:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:32.729 17:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:32.729 17:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:32.987 17:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:32.987 17:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:32.987 17:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:32.987 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:32.987 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:32.987 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:32.987 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:32.987 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:32.987 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:32.987 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:32.987 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:32.987 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:32.987 1+0 records in 00:13:32.987 1+0 records out 00:13:32.987 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108913 s, 3.8 MB/s 00:13:32.987 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.987 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:32.987 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.987 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:32.987 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:32.987 17:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:32.987 17:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:32.987 17:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:33.247 17:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:33.247 17:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:33.247 17:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:33.247 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:13:33.247 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:33.247 17:47:52 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:33.247 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:33.247 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:13:33.247 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:33.247 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:33.247 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:33.247 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:33.247 1+0 records in 00:13:33.247 1+0 records out 00:13:33.247 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366121 s, 11.2 MB/s 00:13:33.247 17:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.247 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:33.247 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.247 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:33.247 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:33.247 17:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:33.247 17:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:33.247 17:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:13:33.247 17:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:33.247 17:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:33.247 17:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:33.247 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:13:33.247 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:33.247 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:33.247 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:33.247 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:13:33.247 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:33.247 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:33.247 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:33.247 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:33.248 1+0 records in 00:13:33.248 1+0 records out 00:13:33.248 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103675 s, 4.0 MB/s 00:13:33.248 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.509 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:33.509 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
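Each dd in these lines is the readiness probe from waitfornbd: the device counts as usable once its name shows up in /proc/partitions and a single 4 KiB O_DIRECT read from it produces a non-empty file. A condensed version of that probe follows; the /tmp/nbdtest path and the 0.1 s retry delay are illustrative choices (the suite uses its own test file under test/bdev and the bounded i <= 20 loops seen in the trace):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do       # bounded wait, as in the trace
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # One 4 KiB direct read proves the device answers real I/O.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct \
            || return 1
        [[ $(stat -c %s /tmp/nbdtest) != 0 ]]; local rc=$?
        rm -f /tmp/nbdtest
        return "$rc"
    }
    waitfornbd nbd0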
00:13:33.509 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:33.509 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:33.509 17:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:33.509 17:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:33.509 17:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:13:33.509 17:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:33.509 17:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:33.509 17:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:33.509 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:13:33.509 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:33.509 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:33.509 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:33.509 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:13:33.509 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:33.509 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:33.509 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:33.509 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:33.509 1+0 records in 00:13:33.509 1+0 records out 00:13:33.509 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000702446 s, 5.8 MB/s 00:13:33.509 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.509 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:33.509 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.509 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:33.509 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:33.509 17:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:33.509 17:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:33.509 17:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:33.770 17:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:33.770 17:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:33.770 17:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:33.770 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:13:33.770 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:33.770 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:33.770 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:33.770 17:47:53 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:13:33.770 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:33.770 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:33.770 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:33.770 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:33.770 1+0 records in 00:13:33.770 1+0 records out 00:13:33.770 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00163584 s, 2.5 MB/s 00:13:33.770 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.770 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:33.770 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.770 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:33.770 17:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:33.770 17:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:33.770 17:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:33.770 17:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:34.032 17:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:34.032 { 00:13:34.032 "nbd_device": "/dev/nbd0", 00:13:34.032 "bdev_name": "nvme0n1" 00:13:34.032 }, 00:13:34.032 { 00:13:34.032 "nbd_device": "/dev/nbd1", 00:13:34.032 "bdev_name": "nvme1n1" 00:13:34.032 }, 00:13:34.032 { 00:13:34.032 "nbd_device": "/dev/nbd2", 00:13:34.032 "bdev_name": "nvme2n1" 00:13:34.032 }, 00:13:34.032 { 00:13:34.032 "nbd_device": "/dev/nbd3", 00:13:34.032 "bdev_name": "nvme2n2" 00:13:34.032 }, 00:13:34.032 { 00:13:34.032 "nbd_device": "/dev/nbd4", 00:13:34.032 "bdev_name": "nvme2n3" 00:13:34.032 }, 00:13:34.032 { 00:13:34.032 "nbd_device": "/dev/nbd5", 00:13:34.032 "bdev_name": "nvme3n1" 00:13:34.032 } 00:13:34.032 ]' 00:13:34.032 17:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:34.032 17:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:34.032 { 00:13:34.032 "nbd_device": "/dev/nbd0", 00:13:34.032 "bdev_name": "nvme0n1" 00:13:34.032 }, 00:13:34.032 { 00:13:34.032 "nbd_device": "/dev/nbd1", 00:13:34.032 "bdev_name": "nvme1n1" 00:13:34.032 }, 00:13:34.032 { 00:13:34.032 "nbd_device": "/dev/nbd2", 00:13:34.032 "bdev_name": "nvme2n1" 00:13:34.032 }, 00:13:34.032 { 00:13:34.032 "nbd_device": "/dev/nbd3", 00:13:34.032 "bdev_name": "nvme2n2" 00:13:34.032 }, 00:13:34.032 { 00:13:34.032 "nbd_device": "/dev/nbd4", 00:13:34.032 "bdev_name": "nvme2n3" 00:13:34.032 }, 00:13:34.032 { 00:13:34.032 "nbd_device": "/dev/nbd5", 00:13:34.032 "bdev_name": "nvme3n1" 00:13:34.032 } 00:13:34.032 ]' 00:13:34.032 17:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:34.032 17:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:34.032 17:47:53 
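The nbd_get_disks call above returns a JSON array of {nbd_device, bdev_name} pairs, which the harness then flattens with jq. A minimal standalone version of that query, assuming the same rpc.py and socket paths as the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

# Full mapping, one "device bdev" pair per line:
"$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | "\(.nbd_device) \(.bdev_name)"'

# Device nodes only, exactly how the nbd_disks_name array above is built:
nbd_disks_name=($("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device'))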
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:34.032 17:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:34.032 17:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:34.032 17:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:34.032 17:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:34.032 17:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:34.294 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:34.294 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:34.294 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:34.294 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:34.294 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:34.294 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:34.294 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:34.294 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:34.294 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:34.294 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:34.552 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:34.552 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:34.552 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:34.552 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:34.552 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:34.552 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:34.552 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:34.552 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:34.553 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:34.553 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:34.812 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:34.812 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:34.812 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:34.812 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:34.812 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:34.812 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:34.812 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:34.812 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:34.812 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:13:34.812 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:35.073 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:35.073 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:35.073 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:35.073 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:35.073 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:35.073 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:35.073 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:35.073 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:35.073 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:35.073 17:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:35.335 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:35.335 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:35.335 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:35.335 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:35.335 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:35.335 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:35.335 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:35.335 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:35.335 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:35.335 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:35.335 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:35.335 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:35.335 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:35.335 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:35.335 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:35.335 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:35.335 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:35.335 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:35.335 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:35.335 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:35.335 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:35.597 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:35.597 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:35.597 17:47:55 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:35.597 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:35.597 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:35.597 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:35.597 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:35.597 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:35.597 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:35.597 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:35.597 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:35.597 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:35.597 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:35.597 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:35.597 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:35.597 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:35.597 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:35.597 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:35.597 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:35.597 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:35.597 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:35.597 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:35.597 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:35.597 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:35.597 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:35.597 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:35.597 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:35.597 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:35.858 /dev/nbd0 00:13:35.858 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:35.858 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:35.858 17:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:35.858 17:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:35.858 17:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:35.858 17:47:55 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:35.858 17:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:35.858 17:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:35.858 17:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:35.858 17:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:35.858 17:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.858 1+0 records in 00:13:35.858 1+0 records out 00:13:35.858 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111776 s, 3.7 MB/s 00:13:35.858 17:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.858 17:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:35.858 17:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.858 17:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:35.858 17:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:35.858 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:35.858 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:35.858 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:36.119 /dev/nbd1 00:13:36.119 17:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:36.119 17:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:36.119 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:36.119 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:36.119 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:36.119 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:36.119 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:36.119 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:36.119 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:36.119 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:36.119 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:36.119 1+0 records in 00:13:36.119 1+0 records out 00:13:36.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00145131 s, 2.8 MB/s 00:13:36.119 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.119 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:36.119 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.119 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:36.119 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # 
return 0 00:13:36.119 17:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:36.120 17:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:36.120 17:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:13:36.380 /dev/nbd10 00:13:36.380 17:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:36.380 17:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:36.380 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:13:36.380 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:36.380 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:36.380 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:36.380 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:13:36.380 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:36.380 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:36.380 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:36.380 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:36.380 1+0 records in 00:13:36.380 1+0 records out 00:13:36.380 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00088122 s, 4.6 MB/s 00:13:36.380 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.380 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:36.380 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.380 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:36.380 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:36.380 17:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:36.380 17:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:36.380 17:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:36.641 /dev/nbd11 00:13:36.641 17:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:36.641 17:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:36.641 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:13:36.641 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:36.641 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:36.641 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:36.641 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:13:36.641 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:36.641 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:36.641 17:47:56 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:36.641 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:36.641 1+0 records in 00:13:36.641 1+0 records out 00:13:36.641 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109229 s, 3.7 MB/s 00:13:36.641 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.641 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:36.641 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.641 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:36.641 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:36.641 17:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:36.641 17:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:36.641 17:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:36.903 /dev/nbd12 00:13:36.903 17:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:36.903 17:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:36.903 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:13:36.903 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:36.903 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:36.903 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:36.903 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:13:36.903 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:36.903 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:36.903 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:36.903 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:36.903 1+0 records in 00:13:36.903 1+0 records out 00:13:36.903 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105373 s, 3.9 MB/s 00:13:36.903 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.903 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:36.903 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.903 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:36.903 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:36.903 17:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:36.903 17:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:36.903 17:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 
00:13:37.163 /dev/nbd13 00:13:37.163 17:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:37.163 17:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:37.163 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:13:37.163 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:37.163 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:37.163 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:37.163 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:13:37.163 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:37.163 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:37.163 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:37.163 17:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:37.163 1+0 records in 00:13:37.163 1+0 records out 00:13:37.163 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000940649 s, 4.4 MB/s 00:13:37.163 17:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.163 17:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:37.163 17:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.163 17:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:37.163 17:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:37.164 17:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:37.164 17:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:37.164 17:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:37.164 17:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:37.164 17:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:37.424 17:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:37.424 { 00:13:37.424 "nbd_device": "/dev/nbd0", 00:13:37.424 "bdev_name": "nvme0n1" 00:13:37.424 }, 00:13:37.424 { 00:13:37.424 "nbd_device": "/dev/nbd1", 00:13:37.424 "bdev_name": "nvme1n1" 00:13:37.424 }, 00:13:37.424 { 00:13:37.424 "nbd_device": "/dev/nbd10", 00:13:37.424 "bdev_name": "nvme2n1" 00:13:37.424 }, 00:13:37.424 { 00:13:37.424 "nbd_device": "/dev/nbd11", 00:13:37.424 "bdev_name": "nvme2n2" 00:13:37.424 }, 00:13:37.424 { 00:13:37.424 "nbd_device": "/dev/nbd12", 00:13:37.424 "bdev_name": "nvme2n3" 00:13:37.424 }, 00:13:37.424 { 00:13:37.424 "nbd_device": "/dev/nbd13", 00:13:37.424 "bdev_name": "nvme3n1" 00:13:37.424 } 00:13:37.424 ]' 00:13:37.424 17:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:37.424 { 00:13:37.424 "nbd_device": "/dev/nbd0", 00:13:37.424 "bdev_name": "nvme0n1" 00:13:37.424 }, 00:13:37.424 { 00:13:37.424 "nbd_device": "/dev/nbd1", 00:13:37.424 "bdev_name": "nvme1n1" 00:13:37.424 }, 00:13:37.424 { 00:13:37.424 "nbd_device": 
"/dev/nbd10", 00:13:37.424 "bdev_name": "nvme2n1" 00:13:37.424 }, 00:13:37.424 { 00:13:37.424 "nbd_device": "/dev/nbd11", 00:13:37.424 "bdev_name": "nvme2n2" 00:13:37.424 }, 00:13:37.424 { 00:13:37.424 "nbd_device": "/dev/nbd12", 00:13:37.424 "bdev_name": "nvme2n3" 00:13:37.424 }, 00:13:37.424 { 00:13:37.424 "nbd_device": "/dev/nbd13", 00:13:37.424 "bdev_name": "nvme3n1" 00:13:37.424 } 00:13:37.424 ]' 00:13:37.424 17:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:37.424 17:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:37.424 /dev/nbd1 00:13:37.424 /dev/nbd10 00:13:37.424 /dev/nbd11 00:13:37.424 /dev/nbd12 00:13:37.424 /dev/nbd13' 00:13:37.424 17:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:37.424 /dev/nbd1 00:13:37.424 /dev/nbd10 00:13:37.424 /dev/nbd11 00:13:37.424 /dev/nbd12 00:13:37.424 /dev/nbd13' 00:13:37.424 17:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:37.424 17:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:37.424 17:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:37.424 17:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:37.425 17:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:37.425 17:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:37.425 17:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:37.425 17:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:37.425 17:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:37.425 17:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:37.425 17:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:37.425 17:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:37.425 256+0 records in 00:13:37.425 256+0 records out 00:13:37.425 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00886841 s, 118 MB/s 00:13:37.425 17:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:37.425 17:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:37.686 256+0 records in 00:13:37.686 256+0 records out 00:13:37.686 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.228347 s, 4.6 MB/s 00:13:37.686 17:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:37.686 17:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:37.946 256+0 records in 00:13:37.947 256+0 records out 00:13:37.947 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.287004 s, 3.7 MB/s 00:13:37.947 17:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:37.947 17:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 
00:13:38.207 256+0 records in 00:13:38.207 256+0 records out 00:13:38.207 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.240076 s, 4.4 MB/s 00:13:38.207 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:38.207 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:38.468 256+0 records in 00:13:38.468 256+0 records out 00:13:38.468 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.227604 s, 4.6 MB/s 00:13:38.468 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:38.468 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:38.728 256+0 records in 00:13:38.728 256+0 records out 00:13:38.728 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.221888 s, 4.7 MB/s 00:13:38.728 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:38.728 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:38.728 256+0 records in 00:13:38.728 256+0 records out 00:13:38.728 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.200866 s, 5.2 MB/s 00:13:38.728 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:38.728 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:38.728 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:38.728 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:38.728 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:38.728 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:38.728 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:38.728 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:38.728 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:38.728 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:38.728 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:38.989 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:38.989 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:38.989 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:38.989 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:38.989 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:38.989 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:38.989 17:47:58 blockdev_xnvme.bdev_nbd -- 
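The dd/cmp sequence above is nbd_dd_data_verify's write-then-verify round trip. Condensed, with the device list, sizes, and flags taken straight from the trace:

tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

# Write phase: one shared 1 MiB random pattern (256 x 4 KiB blocks), pushed
# to every device with O_DIRECT.
dd if=/dev/urandom of="$tmp" bs=4096 count=256
for dev in "${nbd_list[@]}"; do
    dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
done

# Verify phase: byte-compare the first 1 MiB of each device against the
# pattern, then discard the pattern file.
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp" "$dev"
done
rm "$tmp"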
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:38.989 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:38.989 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:38.989 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:38.989 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:38.989 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:38.989 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:38.989 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:38.989 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.989 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:38.989 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:39.249 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:39.249 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:39.249 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:39.249 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:39.249 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:39.249 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:39.249 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:39.249 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:39.249 17:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:39.249 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:39.249 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:39.249 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:39.249 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:39.249 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:39.249 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:39.249 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:39.249 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:39.249 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:39.249 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:39.509 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:39.509 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:39.509 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd10 00:13:39.509 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:39.509 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:39.510 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:39.510 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:39.510 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:39.510 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:39.510 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:39.769 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:39.769 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:39.769 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:39.769 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:39.769 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:39.769 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:39.769 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:39.769 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:39.769 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:39.769 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:40.032 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:40.032 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:40.032 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:40.032 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:40.032 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:40.032 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:40.032 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:40.032 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:40.032 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:40.032 17:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:40.291 17:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:40.291 17:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:40.291 17:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:40.291 17:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:40.291 17:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:40.291 17:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:40.291 17:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:40.291 17:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:40.291 17:48:00 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:40.291 17:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:40.291 17:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:40.552 17:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:40.552 17:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:40.552 17:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:40.552 17:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:40.552 17:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:40.552 17:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:40.552 17:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:40.552 17:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:40.552 17:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:40.552 17:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:40.552 17:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:40.552 17:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:40.552 17:48:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:40.552 17:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:40.552 17:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:40.552 17:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:40.813 malloc_lvol_verify 00:13:40.813 17:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:41.074 9c493871-434d-431c-a04c-355c6a47a02e 00:13:41.074 17:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:41.074 763db280-9104-434f-8e7b-0c671c3e1f4f 00:13:41.336 17:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:41.336 /dev/nbd0 00:13:41.336 17:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:41.336 17:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:41.336 17:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:41.336 17:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:41.336 17:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:13:41.336 mke2fs 1.47.0 (5-Feb-2023) 00:13:41.336 Discarding device blocks: 0/4096 done 00:13:41.336 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:41.336 00:13:41.336 Allocating group tables: 0/1 done 00:13:41.336 Writing inode tables: 0/1 done 00:13:41.336 Creating journal (1024 blocks): done 00:13:41.336 Writing superblocks and filesystem accounting 
information: 0/1 done 00:13:41.336 00:13:41.336 17:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:41.336 17:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:41.336 17:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:41.336 17:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:41.336 17:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:41.336 17:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:41.336 17:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:41.596 17:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:41.596 17:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:41.596 17:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:41.596 17:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:41.596 17:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.596 17:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:41.596 17:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:41.596 17:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:41.596 17:48:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 82411 00:13:41.596 17:48:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 82411 ']' 00:13:41.596 17:48:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 82411 00:13:41.597 17:48:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:13:41.597 17:48:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:41.597 17:48:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82411 00:13:41.597 17:48:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:41.597 17:48:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:41.597 killing process with pid 82411 00:13:41.597 17:48:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82411' 00:13:41.597 17:48:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 82411 00:13:41.597 17:48:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 82411 00:13:41.858 17:48:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:41.858 00:13:41.858 real 0m10.313s 00:13:41.858 user 0m14.028s 00:13:41.858 sys 0m3.737s 00:13:41.858 17:48:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:41.858 17:48:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:41.858 ************************************ 00:13:41.858 END TEST bdev_nbd 00:13:41.858 ************************************ 00:13:41.858 17:48:01 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:41.858 17:48:01 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:41.858 17:48:01 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:41.858 17:48:01 blockdev_xnvme -- 
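The lvol leg just traced (nbd_with_lvol_verify) boils down to a handful of RPCs plus a filesystem check. Every argument below is copied from the trace; only the sequencing glue is paraphrased:

# Back a logical volume with a 16 MiB / 512 B-block malloc bdev, export it
# over NBD, and prove it is usable end to end with mkfs.ext4.
"$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512
"$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
"$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs
"$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0

# wait_for_nbd_set_capacity: mkfs is only safe once the kernel reports a
# non-zero size for the node.
[[ -e /sys/block/nbd0/size && $(cat /sys/block/nbd0/size) -ne 0 ]]

mkfs.ext4 /dev/nbd0
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0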
bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:41.858 17:48:01 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:41.858 17:48:01 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:41.858 17:48:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:41.858 ************************************ 00:13:41.858 START TEST bdev_fio 00:13:41.858 ************************************ 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:13:41.858 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # echo serialize_overlap=1 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:41.858 ************************************ 00:13:41.858 START TEST bdev_fio_rw_verify 00:13:41.858 ************************************ 00:13:41.858 17:48:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:41.859 17:48:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:41.859 17:48:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:13:41.859 17:48:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 
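Assembled, the job file generated by the echoes above looks roughly like the fragment below. Only serialize_overlap and the per-bdev sections are visible in this trace; the verify-workload globals prepended by fio_config_gen are omitted here rather than guessed.

serialize_overlap=1

[job_nvme0n1]
filename=nvme0n1

[job_nvme1n1]
filename=nvme1n1

; ...and likewise job_nvme2n1, job_nvme2n2, job_nvme2n3, job_nvme3n1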
'libclang_rt.asan') 00:13:41.859 17:48:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:13:41.859 17:48:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:41.859 17:48:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:13:41.859 17:48:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:13:41.859 17:48:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:13:41.859 17:48:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:13:41.859 17:48:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:41.859 17:48:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:13:42.119 17:48:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:42.119 17:48:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:42.119 17:48:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # break 00:13:42.119 17:48:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:42.119 17:48:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:42.119 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:42.119 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:42.119 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:42.119 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:42.119 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:42.119 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:42.119 fio-3.35 00:13:42.119 Starting 6 threads 00:13:54.358 00:13:54.358 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=82817: Tue Nov 5 17:48:12 2024 00:13:54.358 read: IOPS=13.3k, BW=52.0MiB/s (54.5MB/s)(520MiB/10002msec) 00:13:54.358 slat (usec): min=2, max=2085, avg= 6.38, stdev=15.01 00:13:54.358 clat (usec): min=73, max=1141.6k, avg=1527.96, stdev=8873.72 00:13:54.358 lat (usec): min=75, max=1141.6k, avg=1534.34, stdev=8873.80 00:13:54.358 clat percentiles (usec): 00:13:54.358 | 50.000th=[ 1352], 99.000th=[ 3916], 99.900th=[ 5211], 00:13:54.358 | 99.990th=[ 7635], 99.999th=[1149240] 00:13:54.358 write: IOPS=13.6k, BW=53.3MiB/s (55.8MB/s)(533MiB/10002msec); 0 zone resets 00:13:54.358 slat (usec): min=9, max=3714, avg=39.64, 
stdev=137.79 00:13:54.358 clat (usec): min=80, max=16362, avg=1715.14, stdev=881.75 00:13:54.358 lat (usec): min=94, max=16398, avg=1754.79, stdev=895.50 00:13:54.358 clat percentiles (usec): 00:13:54.358 | 50.000th=[ 1565], 99.000th=[ 4424], 99.900th=[ 5866], 99.990th=[ 8029], 00:13:54.358 | 99.999th=[16319] 00:13:54.358 bw ( KiB/s): min=38653, max=87096, per=100.00%, avg=55389.26, stdev=2088.29, samples=113 00:13:54.358 iops : min= 9662, max=21773, avg=13846.15, stdev=522.00, samples=113 00:13:54.358 lat (usec) : 100=0.01%, 250=1.07%, 500=4.53%, 750=8.31%, 1000=11.87% 00:13:54.358 lat (msec) : 2=48.25%, 4=24.60%, 10=1.37%, 20=0.01%, 2000=0.01% 00:13:54.358 cpu : usr=48.42%, sys=30.35%, ctx=4586, majf=0, minf=13972 00:13:54.358 IO depths : 1=11.3%, 2=23.7%, 4=51.2%, 8=13.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:54.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.358 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.358 issued rwts: total=133096,136371,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:54.358 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:54.358 00:13:54.358 Run status group 0 (all jobs): 00:13:54.359 READ: bw=52.0MiB/s (54.5MB/s), 52.0MiB/s-52.0MiB/s (54.5MB/s-54.5MB/s), io=520MiB (545MB), run=10002-10002msec 00:13:54.359 WRITE: bw=53.3MiB/s (55.8MB/s), 53.3MiB/s-53.3MiB/s (55.8MB/s-55.8MB/s), io=533MiB (559MB), run=10002-10002msec 00:13:54.359 ----------------------------------------------------- 00:13:54.359 Suppressions used: 00:13:54.359 count bytes template 00:13:54.359 6 48 /usr/src/fio/parse.c 00:13:54.359 3186 305856 /usr/src/fio/iolog.c 00:13:54.359 1 8 libtcmalloc_minimal.so 00:13:54.359 1 904 libcrypto.so 00:13:54.359 ----------------------------------------------------- 00:13:54.359 00:13:54.359 00:13:54.359 real 0m11.258s 00:13:54.359 user 0m29.838s 00:13:54.359 sys 0m18.580s 00:13:54.359 ************************************ 00:13:54.359 END TEST bdev_fio_rw_verify 00:13:54.359 ************************************ 00:13:54.359 17:48:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:54.359 17:48:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:54.359 17:48:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:54.359 17:48:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:54.359 17:48:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:54.359 17:48:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:54.359 17:48:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:13:54.359 17:48:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:13:54.359 17:48:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:13:54.359 17:48:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:13:54.359 17:48:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:54.359 17:48:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:13:54.359 17:48:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' 
']' 00:13:54.359 17:48:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:54.359 17:48:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:13:54.359 17:48:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:13:54.359 17:48:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:13:54.359 17:48:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:13:54.359 17:48:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:54.359 17:48:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "68ba83e2-4fb8-4e5a-8bce-f1266d14c989"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "68ba83e2-4fb8-4e5a-8bce-f1266d14c989",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "86b4fa51-ac10-4a0a-81df-a47be24264a5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "86b4fa51-ac10-4a0a-81df-a47be24264a5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "59a56005-5fdc-4fa6-a115-3c3bab0ad2b1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "59a56005-5fdc-4fa6-a115-3c3bab0ad2b1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' 
"5eb11ba0-090b-464f-8071-2b6a7396dd20"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5eb11ba0-090b-464f-8071-2b6a7396dd20",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "0c04ff5f-a9a7-4c9a-a262-d897fd5c7341"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0c04ff5f-a9a7-4c9a-a262-d897fd5c7341",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "66aa43ba-4684-4bca-af89-7534df016fec"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "66aa43ba-4684-4bca-af89-7534df016fec",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:54.359 17:48:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:54.359 17:48:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:54.359 /home/vagrant/spdk_repo/spdk 00:13:54.359 17:48:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:54.359 17:48:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:54.359 17:48:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:13:54.359 ************************************ 00:13:54.359 00:13:54.359 real 0m11.441s 00:13:54.359 user 0m29.908s 00:13:54.359 sys 0m18.671s 00:13:54.359 17:48:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:54.359 17:48:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:54.359 END 
TEST bdev_fio 00:13:54.359 ************************************ 00:13:54.359 17:48:13 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:54.359 17:48:13 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:54.359 17:48:13 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:13:54.359 17:48:13 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:54.359 17:48:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:54.359 ************************************ 00:13:54.359 START TEST bdev_verify 00:13:54.359 ************************************ 00:13:54.359 17:48:13 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:54.359 [2024-11-05 17:48:13.354686] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:13:54.359 [2024-11-05 17:48:13.354879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82992 ] 00:13:54.359 [2024-11-05 17:48:13.493235] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:54.359 [2024-11-05 17:48:13.522651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:54.359 [2024-11-05 17:48:13.565574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.359 [2024-11-05 17:48:13.565626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.359 Running I/O for 5 seconds... 
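For reference, the verify pass launched above reduces to a single bdevperf invocation. The sketch below restates it with the flags exactly as logged (the harness's empty trailing positional argument is omitted); the meaning of -C is inferred from the per-bdev Core Mask 0x1/0x2 job pairs in the results that follow.

    # Sketch of the logged bdev_verify run (flags copied from the command above):
    #   -q 128     queue depth per job
    #   -o 4096    I/O size in bytes
    #   -w verify  write, then read back and compare
    #   -t 5       run time in seconds
    #   -C         every core submits I/O to every bdev (inferred from the job pairs below)
    #   -m 0x3     core mask, cores 0 and 1
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3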
00:13:56.319 23424.00 IOPS, 91.50 MiB/s [2024-11-05T17:48:17.255Z] 23248.00 IOPS, 90.81 MiB/s [2024-11-05T17:48:18.199Z] 22953.67 IOPS, 89.66 MiB/s [2024-11-05T17:48:19.141Z] 23103.25 IOPS, 90.25 MiB/s [2024-11-05T17:48:19.141Z] 22937.60 IOPS, 89.60 MiB/s 00:13:59.150 Latency(us) 00:13:59.150 [2024-11-05T17:48:19.141Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:59.151 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:59.151 Verification LBA range: start 0x0 length 0xa0000 00:13:59.151 nvme0n1 : 5.04 1803.45 7.04 0.00 0.00 70803.78 7410.61 74610.22 00:13:59.151 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:59.151 Verification LBA range: start 0xa0000 length 0xa0000 00:13:59.151 nvme0n1 : 5.01 1710.52 6.68 0.00 0.00 74687.51 10284.11 93968.54 00:13:59.151 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:59.151 Verification LBA range: start 0x0 length 0xbd0bd 00:13:59.151 nvme1n1 : 5.07 2271.90 8.87 0.00 0.00 55884.47 6024.27 59688.17 00:13:59.151 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:59.151 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:59.151 nvme1n1 : 5.06 2390.93 9.34 0.00 0.00 53294.47 4007.78 63721.16 00:13:59.151 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:59.151 Verification LBA range: start 0x0 length 0x80000 00:13:59.151 nvme2n1 : 5.08 1865.20 7.29 0.00 0.00 68005.53 10536.17 65737.65 00:13:59.151 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:59.151 Verification LBA range: start 0x80000 length 0x80000 00:13:59.151 nvme2n1 : 5.04 1828.69 7.14 0.00 0.00 69561.59 8166.79 73803.62 00:13:59.151 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:59.151 Verification LBA range: start 0x0 length 0x80000 00:13:59.151 nvme2n2 : 5.06 1821.67 7.12 0.00 0.00 69426.29 10233.70 65737.65 00:13:59.151 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:59.151 Verification LBA range: start 0x80000 length 0x80000 00:13:59.151 nvme2n2 : 5.07 1793.24 7.00 0.00 0.00 70800.77 9830.40 73803.62 00:13:59.151 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:59.151 Verification LBA range: start 0x0 length 0x80000 00:13:59.151 nvme2n3 : 5.07 1817.05 7.10 0.00 0.00 69469.98 8570.09 63317.86 00:13:59.151 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:59.151 Verification LBA range: start 0x80000 length 0x80000 00:13:59.151 nvme2n3 : 5.07 1792.52 7.00 0.00 0.00 70669.77 5772.21 65334.35 00:13:59.151 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:59.151 Verification LBA range: start 0x0 length 0x20000 00:13:59.151 nvme3n1 : 5.09 1837.10 7.18 0.00 0.00 68644.24 3604.48 72593.72 00:13:59.151 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:59.151 Verification LBA range: start 0x20000 length 0x20000 00:13:59.151 nvme3n1 : 5.06 1769.08 6.91 0.00 0.00 71442.97 4889.99 66947.54 00:13:59.151 [2024-11-05T17:48:19.142Z] =================================================================================================================== 00:13:59.151 [2024-11-05T17:48:19.142Z] Total : 22701.34 88.68 0.00 0.00 67069.80 3604.48 93968.54 00:13:59.412 00:13:59.413 real 0m5.868s 00:13:59.413 user 0m9.423s 00:13:59.413 sys 0m1.424s 00:13:59.413 17:48:19 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:13:59.413 17:48:19 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:59.413 ************************************ 00:13:59.413 END TEST bdev_verify 00:13:59.413 ************************************ 00:13:59.413 17:48:19 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:59.413 17:48:19 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:13:59.413 17:48:19 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:59.413 17:48:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:59.413 ************************************ 00:13:59.413 START TEST bdev_verify_big_io 00:13:59.413 ************************************ 00:13:59.413 17:48:19 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:59.413 [2024-11-05 17:48:19.272175] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:13:59.413 [2024-11-05 17:48:19.272302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83082 ] 00:13:59.413 [2024-11-05 17:48:19.404622] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:59.674 [2024-11-05 17:48:19.434562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:59.674 [2024-11-05 17:48:19.477339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.674 [2024-11-05 17:48:19.477431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.935 Running I/O for 5 seconds... 
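The big-I/O pass launched above is the same bdevperf verify run with only the transfer size changed; a sketch of the delta:

    # Identical to the 4 KiB verify pass except for the I/O size:
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3   # 64 KiB I/Os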
00:14:05.203 856.00 IOPS, 53.50 MiB/s [2024-11-05T17:48:25.765Z] 2422.50 IOPS, 151.41 MiB/s [2024-11-05T17:48:26.025Z] 3010.67 IOPS, 188.17 MiB/s 00:14:06.034 Latency(us) 00:14:06.034 [2024-11-05T17:48:26.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.034 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:06.034 Verification LBA range: start 0x0 length 0xa000 00:14:06.034 nvme0n1 : 5.77 123.49 7.72 0.00 0.00 986690.08 52025.50 1664816.05 00:14:06.034 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:06.034 Verification LBA range: start 0xa000 length 0xa000 00:14:06.034 nvme0n1 : 5.86 136.54 8.53 0.00 0.00 912294.90 103244.41 1019538.51 00:14:06.034 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:06.034 Verification LBA range: start 0x0 length 0xbd0b 00:14:06.034 nvme1n1 : 5.92 138.09 8.63 0.00 0.00 873589.70 21979.77 1258291.20 00:14:06.034 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:06.034 Verification LBA range: start 0xbd0b length 0xbd0b 00:14:06.034 nvme1n1 : 5.89 108.55 6.78 0.00 0.00 1108022.29 11292.36 2645637.91 00:14:06.034 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:06.034 Verification LBA range: start 0x0 length 0x8000 00:14:06.034 nvme2n1 : 5.89 158.93 9.93 0.00 0.00 740406.42 107277.39 683994.19 00:14:06.034 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:06.034 Verification LBA range: start 0x8000 length 0x8000 00:14:06.034 nvme2n1 : 5.87 141.76 8.86 0.00 0.00 822970.26 55655.19 1038896.84 00:14:06.034 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:06.034 Verification LBA range: start 0x0 length 0x8000 00:14:06.034 nvme2n2 : 5.90 116.68 7.29 0.00 0.00 974466.39 109697.18 2310093.59 00:14:06.034 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:06.035 Verification LBA range: start 0x8000 length 0x8000 00:14:06.035 nvme2n2 : 5.87 128.62 8.04 0.00 0.00 882070.36 60091.47 1793871.56 00:14:06.035 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:06.035 Verification LBA range: start 0x0 length 0x8000 00:14:06.035 nvme2n3 : 5.93 170.08 10.63 0.00 0.00 652702.85 25710.28 1309913.40 00:14:06.035 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:06.035 Verification LBA range: start 0x8000 length 0x8000 00:14:06.035 nvme2n3 : 5.88 139.22 8.70 0.00 0.00 792153.95 17845.96 1606741.07 00:14:06.035 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:06.035 Verification LBA range: start 0x0 length 0x2000 00:14:06.035 nvme3n1 : 5.94 126.57 7.91 0.00 0.00 849680.19 535.63 1780966.01 00:14:06.035 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:06.035 Verification LBA range: start 0x2000 length 0x2000 00:14:06.035 nvme3n1 : 5.94 166.03 10.38 0.00 0.00 645901.06 563.99 725937.23 00:14:06.035 [2024-11-05T17:48:26.026Z] =================================================================================================================== 00:14:06.035 [2024-11-05T17:48:26.026Z] Total : 1654.57 103.41 0.00 0.00 836493.18 535.63 2645637.91 00:14:06.295 00:14:06.295 real 0m6.872s 00:14:06.295 user 0m12.247s 00:14:06.295 sys 0m0.647s 00:14:06.295 17:48:26 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:06.295 17:48:26 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.295 ************************************ 00:14:06.295 END TEST bdev_verify_big_io 00:14:06.295 ************************************ 00:14:06.295 17:48:26 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:06.295 17:48:26 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:14:06.295 17:48:26 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:06.295 17:48:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:06.295 ************************************ 00:14:06.295 START TEST bdev_write_zeroes 00:14:06.295 ************************************ 00:14:06.295 17:48:26 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:06.295 [2024-11-05 17:48:26.227252] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:14:06.295 [2024-11-05 17:48:26.227419] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83180 ] 00:14:06.555 [2024-11-05 17:48:26.363894] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:06.556 [2024-11-05 17:48:26.391210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.556 [2024-11-05 17:48:26.432707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.816 Running I/O for 1 seconds... 
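The write_zeroes smoke test launched above again reuses bdevperf, switching the workload and shortening the run to one second; judging by the logged command and the EAL "-c 0x1" line, it also drops -C/-m so a single core drives all bdevs. A sketch with the flags as logged:

    # Sketch of the logged write_zeroes pass (single core, one-second run):
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1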
00:14:08.016 85984.00 IOPS, 335.88 MiB/s 00:14:08.016 Latency(us) 00:14:08.016 [2024-11-05T17:48:28.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.017 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:08.017 nvme0n1 : 1.01 14045.94 54.87 0.00 0.00 9104.05 6175.51 22383.06 00:14:08.017 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:08.017 nvme1n1 : 1.02 15512.61 60.60 0.00 0.00 8219.81 4713.55 16131.94 00:14:08.017 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:08.017 nvme2n1 : 1.02 14051.80 54.89 0.00 0.00 9030.08 6225.92 22584.71 00:14:08.017 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:08.017 nvme2n2 : 1.02 14035.60 54.83 0.00 0.00 9031.07 6225.92 22685.54 00:14:08.017 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:08.017 nvme2n3 : 1.02 14019.73 54.76 0.00 0.00 9032.17 6225.92 22685.54 00:14:08.017 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:08.017 nvme3n1 : 1.02 14003.41 54.70 0.00 0.00 9034.55 5671.38 22685.54 00:14:08.017 [2024-11-05T17:48:28.008Z] =================================================================================================================== 00:14:08.017 [2024-11-05T17:48:28.008Z] Total : 85669.08 334.64 0.00 0.00 8896.73 4713.55 22685.54 00:14:08.277 00:14:08.277 real 0m1.872s 00:14:08.277 user 0m1.136s 00:14:08.277 sys 0m0.536s 00:14:08.277 17:48:28 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:08.277 ************************************ 00:14:08.277 END TEST bdev_write_zeroes 00:14:08.277 ************************************ 00:14:08.277 17:48:28 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:14:08.277 17:48:28 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:08.277 17:48:28 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:14:08.277 17:48:28 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:08.277 17:48:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:08.277 ************************************ 00:14:08.277 START TEST bdev_json_nonenclosed 00:14:08.277 ************************************ 00:14:08.277 17:48:28 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:08.278 [2024-11-05 17:48:28.167860] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:14:08.278 [2024-11-05 17:48:28.168029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83222 ] 00:14:08.538 [2024-11-05 17:48:28.309721] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:08.538 [2024-11-05 17:48:28.338650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.538 [2024-11-05 17:48:28.379249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.539 [2024-11-05 17:48:28.379375] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:08.539 [2024-11-05 17:48:28.379402] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:08.539 [2024-11-05 17:48:28.379415] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:08.539 00:14:08.539 real 0m0.381s 00:14:08.539 user 0m0.158s 00:14:08.539 sys 0m0.118s 00:14:08.539 17:48:28 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:08.539 ************************************ 00:14:08.539 17:48:28 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:14:08.539 END TEST bdev_json_nonenclosed 00:14:08.539 ************************************ 00:14:08.539 17:48:28 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:08.539 17:48:28 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:14:08.539 17:48:28 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:08.539 17:48:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:08.800 ************************************ 00:14:08.800 START TEST bdev_json_nonarray 00:14:08.800 ************************************ 00:14:08.800 17:48:28 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:08.800 [2024-11-05 17:48:28.609915] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:14:08.800 [2024-11-05 17:48:28.610098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83248 ] 00:14:08.800 [2024-11-05 17:48:28.745417] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:08.800 [2024-11-05 17:48:28.774550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.062 [2024-11-05 17:48:28.814969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.062 [2024-11-05 17:48:28.815126] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
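The two JSON negative tests around this point feed bdevperf deliberately malformed configs. The shapes below are a hedged illustration inferred from the logged error messages, not the repository's literal nonenclosed.json and nonarray.json files:

    # Top-level value that is not an object:
    printf '%s\n' '"subsystems": []' > nonenclosed.json
    # -> "Invalid JSON configuration: not enclosed in {}."

    # "subsystems" present but not an array:
    printf '%s\n' '{ "subsystems": { "subsystem": "bdev" } }' > nonarray.json
    # -> "Invalid JSON configuration: 'subsystems' should be an array."

    # For contrast, the valid shape keeps "subsystems" as an array of objects,
    # exactly the form the save_config dumps later in this log use:
    printf '%s\n' '{ "subsystems": [ { "subsystem": "bdev", "config": [] } ] }' > valid.json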
00:14:09.062 [2024-11-05 17:48:28.815150] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:09.062 [2024-11-05 17:48:28.815163] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:09.062 00:14:09.062 real 0m0.377s 00:14:09.062 user 0m0.141s 00:14:09.062 sys 0m0.132s 00:14:09.062 17:48:28 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:09.062 17:48:28 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:14:09.062 ************************************ 00:14:09.062 END TEST bdev_json_nonarray 00:14:09.062 ************************************ 00:14:09.062 17:48:28 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:14:09.062 17:48:28 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:14:09.062 17:48:28 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:14:09.062 17:48:28 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:14:09.062 17:48:28 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:14:09.062 17:48:28 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:09.062 17:48:28 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:09.062 17:48:28 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:14:09.062 17:48:28 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:14:09.062 17:48:28 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:14:09.062 17:48:28 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:14:09.062 17:48:28 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:09.636 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:19.622 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:19.622 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:19.622 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:19.622 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:19.622 ************************************ 00:14:19.622 END TEST blockdev_xnvme 00:14:19.622 ************************************ 00:14:19.622 00:14:19.622 real 0m57.069s 00:14:19.622 user 1m19.375s 00:14:19.622 sys 0m44.290s 00:14:19.622 17:48:38 blockdev_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:19.622 17:48:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:19.622 17:48:38 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:19.622 17:48:38 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:19.622 17:48:38 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:19.622 17:48:38 -- common/autotest_common.sh@10 -- # set +x 00:14:19.622 ************************************ 00:14:19.622 START TEST ublk 00:14:19.622 ************************************ 00:14:19.622 17:48:38 ublk -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:19.622 * Looking for test storage... 
00:14:19.622 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:19.622 17:48:38 ublk -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:19.622 17:48:38 ublk -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:19.622 17:48:38 ublk -- common/autotest_common.sh@1691 -- # lcov --version 00:14:19.622 17:48:38 ublk -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:19.622 17:48:38 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:19.622 17:48:38 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:19.622 17:48:38 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:19.622 17:48:38 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:14:19.622 17:48:38 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:14:19.622 17:48:38 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:14:19.622 17:48:38 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:14:19.622 17:48:38 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:14:19.622 17:48:38 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:14:19.622 17:48:38 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:14:19.622 17:48:38 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:19.622 17:48:38 ublk -- scripts/common.sh@344 -- # case "$op" in 00:14:19.622 17:48:38 ublk -- scripts/common.sh@345 -- # : 1 00:14:19.622 17:48:38 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:19.622 17:48:38 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:19.622 17:48:38 ublk -- scripts/common.sh@365 -- # decimal 1 00:14:19.622 17:48:38 ublk -- scripts/common.sh@353 -- # local d=1 00:14:19.622 17:48:38 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:19.622 17:48:38 ublk -- scripts/common.sh@355 -- # echo 1 00:14:19.622 17:48:38 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:14:19.622 17:48:38 ublk -- scripts/common.sh@366 -- # decimal 2 00:14:19.622 17:48:38 ublk -- scripts/common.sh@353 -- # local d=2 00:14:19.622 17:48:38 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:19.622 17:48:38 ublk -- scripts/common.sh@355 -- # echo 2 00:14:19.622 17:48:38 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:14:19.622 17:48:38 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:19.622 17:48:38 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:19.622 17:48:38 ublk -- scripts/common.sh@368 -- # return 0 00:14:19.622 17:48:38 ublk -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:19.622 17:48:38 ublk -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:19.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.622 --rc genhtml_branch_coverage=1 00:14:19.622 --rc genhtml_function_coverage=1 00:14:19.622 --rc genhtml_legend=1 00:14:19.622 --rc geninfo_all_blocks=1 00:14:19.622 --rc geninfo_unexecuted_blocks=1 00:14:19.622 00:14:19.622 ' 00:14:19.622 17:48:38 ublk -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:19.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.622 --rc genhtml_branch_coverage=1 00:14:19.622 --rc genhtml_function_coverage=1 00:14:19.622 --rc genhtml_legend=1 00:14:19.622 --rc geninfo_all_blocks=1 00:14:19.622 --rc geninfo_unexecuted_blocks=1 00:14:19.622 00:14:19.622 ' 00:14:19.622 17:48:38 ublk -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:19.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.622 --rc genhtml_branch_coverage=1 00:14:19.622 --rc 
genhtml_function_coverage=1 00:14:19.622 --rc genhtml_legend=1 00:14:19.622 --rc geninfo_all_blocks=1 00:14:19.622 --rc geninfo_unexecuted_blocks=1 00:14:19.622 00:14:19.622 ' 00:14:19.622 17:48:38 ublk -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:19.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.622 --rc genhtml_branch_coverage=1 00:14:19.622 --rc genhtml_function_coverage=1 00:14:19.622 --rc genhtml_legend=1 00:14:19.622 --rc geninfo_all_blocks=1 00:14:19.622 --rc geninfo_unexecuted_blocks=1 00:14:19.622 00:14:19.622 ' 00:14:19.622 17:48:38 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:19.622 17:48:38 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:19.622 17:48:38 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:19.622 17:48:38 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:19.622 17:48:38 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:19.622 17:48:38 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:19.622 17:48:38 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:19.622 17:48:38 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:19.622 17:48:38 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:19.622 17:48:38 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:14:19.622 17:48:38 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:14:19.622 17:48:38 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:14:19.622 17:48:38 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:14:19.622 17:48:38 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:14:19.622 17:48:38 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:14:19.622 17:48:38 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:14:19.622 17:48:38 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:14:19.622 17:48:38 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:14:19.622 17:48:38 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:14:19.622 17:48:38 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:14:19.622 17:48:38 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:19.622 17:48:38 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:19.622 17:48:38 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:19.622 ************************************ 00:14:19.622 START TEST test_save_ublk_config 00:14:19.622 ************************************ 00:14:19.622 17:48:38 ublk.test_save_ublk_config -- common/autotest_common.sh@1127 -- # test_save_config 00:14:19.622 17:48:38 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:14:19.622 17:48:38 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:14:19.622 17:48:38 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=83537 00:14:19.622 17:48:38 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:14:19.622 17:48:38 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 83537 00:14:19.622 17:48:38 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 83537 ']' 00:14:19.622 17:48:38 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
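The save/restore test starting here builds its state through a short RPC sequence before dumping it with save_config. In the sketch below, the method names and parameter values are taken verbatim from the config dumped further down, while the rpc.py flag spellings are assumptions for illustration only:

    # Assumed rpc.py spellings; methods and values match the saved config below.
    ./scripts/rpc.py ublk_create_target -m 1                  # cpumask "1"
    ./scripts/rpc.py bdev_malloc_create -b malloc0 32 4096    # 8192 blocks x 4096 B = 32 MiB
    ./scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128    # ublk_id 0, 1 queue, depth 128
    ./scripts/rpc.py save_config > saved.json                 # compare with the dump below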
00:14:19.622 17:48:38 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:19.622 17:48:38 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.622 17:48:38 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:19.622 17:48:38 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:19.622 [2024-11-05 17:48:39.055696] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:14:19.622 [2024-11-05 17:48:39.055811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83537 ] 00:14:19.622 [2024-11-05 17:48:39.185422] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:19.622 [2024-11-05 17:48:39.214530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.622 [2024-11-05 17:48:39.238927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.880 17:48:39 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:19.880 17:48:39 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:14:19.880 17:48:39 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:14:19.880 17:48:39 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:14:19.880 17:48:39 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.880 17:48:39 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:19.880 [2024-11-05 17:48:39.858086] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:19.880 [2024-11-05 17:48:39.858800] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:20.139 malloc0 00:14:20.139 [2024-11-05 17:48:39.890201] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:20.139 [2024-11-05 17:48:39.890283] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:20.139 [2024-11-05 17:48:39.890300] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:20.139 [2024-11-05 17:48:39.890310] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:20.139 [2024-11-05 17:48:39.899175] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:20.139 [2024-11-05 17:48:39.899198] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:20.139 [2024-11-05 17:48:39.906093] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:20.139 [2024-11-05 17:48:39.906187] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:20.139 [2024-11-05 17:48:39.923086] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:20.139 0 00:14:20.139 17:48:39 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.139 17:48:39 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:14:20.139 17:48:39 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:20.139 17:48:39 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:20.397 17:48:40 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.397 17:48:40 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:14:20.397 "subsystems": [ 00:14:20.397 { 00:14:20.397 "subsystem": "fsdev", 00:14:20.397 "config": [ 00:14:20.397 { 00:14:20.397 "method": "fsdev_set_opts", 00:14:20.397 "params": { 00:14:20.397 "fsdev_io_pool_size": 65535, 00:14:20.397 "fsdev_io_cache_size": 256 00:14:20.397 } 00:14:20.397 } 00:14:20.397 ] 00:14:20.397 }, 00:14:20.397 { 00:14:20.397 "subsystem": "keyring", 00:14:20.397 "config": [] 00:14:20.397 }, 00:14:20.397 { 00:14:20.397 "subsystem": "iobuf", 00:14:20.397 "config": [ 00:14:20.397 { 00:14:20.397 "method": "iobuf_set_options", 00:14:20.397 "params": { 00:14:20.397 "small_pool_count": 8192, 00:14:20.397 "large_pool_count": 1024, 00:14:20.397 "small_bufsize": 8192, 00:14:20.397 "large_bufsize": 135168, 00:14:20.397 "enable_numa": false 00:14:20.397 } 00:14:20.397 } 00:14:20.397 ] 00:14:20.397 }, 00:14:20.397 { 00:14:20.397 "subsystem": "sock", 00:14:20.397 "config": [ 00:14:20.397 { 00:14:20.397 "method": "sock_set_default_impl", 00:14:20.397 "params": { 00:14:20.397 "impl_name": "posix" 00:14:20.397 } 00:14:20.397 }, 00:14:20.397 { 00:14:20.397 "method": "sock_impl_set_options", 00:14:20.397 "params": { 00:14:20.397 "impl_name": "ssl", 00:14:20.397 "recv_buf_size": 4096, 00:14:20.397 "send_buf_size": 4096, 00:14:20.397 "enable_recv_pipe": true, 00:14:20.397 "enable_quickack": false, 00:14:20.397 "enable_placement_id": 0, 00:14:20.397 "enable_zerocopy_send_server": true, 00:14:20.397 "enable_zerocopy_send_client": false, 00:14:20.397 "zerocopy_threshold": 0, 00:14:20.397 "tls_version": 0, 00:14:20.397 "enable_ktls": false 00:14:20.397 } 00:14:20.397 }, 00:14:20.397 { 00:14:20.397 "method": "sock_impl_set_options", 00:14:20.397 "params": { 00:14:20.397 "impl_name": "posix", 00:14:20.397 "recv_buf_size": 2097152, 00:14:20.397 "send_buf_size": 2097152, 00:14:20.397 "enable_recv_pipe": true, 00:14:20.397 "enable_quickack": false, 00:14:20.397 "enable_placement_id": 0, 00:14:20.397 "enable_zerocopy_send_server": true, 00:14:20.397 "enable_zerocopy_send_client": false, 00:14:20.397 "zerocopy_threshold": 0, 00:14:20.397 "tls_version": 0, 00:14:20.397 "enable_ktls": false 00:14:20.397 } 00:14:20.397 } 00:14:20.397 ] 00:14:20.397 }, 00:14:20.397 { 00:14:20.397 "subsystem": "vmd", 00:14:20.397 "config": [] 00:14:20.397 }, 00:14:20.397 { 00:14:20.397 "subsystem": "accel", 00:14:20.397 "config": [ 00:14:20.397 { 00:14:20.397 "method": "accel_set_options", 00:14:20.397 "params": { 00:14:20.397 "small_cache_size": 128, 00:14:20.397 "large_cache_size": 16, 00:14:20.397 "task_count": 2048, 00:14:20.397 "sequence_count": 2048, 00:14:20.397 "buf_count": 2048 00:14:20.397 } 00:14:20.397 } 00:14:20.397 ] 00:14:20.397 }, 00:14:20.397 { 00:14:20.397 "subsystem": "bdev", 00:14:20.397 "config": [ 00:14:20.397 { 00:14:20.397 "method": "bdev_set_options", 00:14:20.397 "params": { 00:14:20.397 "bdev_io_pool_size": 65535, 00:14:20.397 "bdev_io_cache_size": 256, 00:14:20.397 "bdev_auto_examine": true, 00:14:20.397 "iobuf_small_cache_size": 128, 00:14:20.397 "iobuf_large_cache_size": 16 00:14:20.397 } 00:14:20.397 }, 00:14:20.397 { 00:14:20.397 "method": "bdev_raid_set_options", 00:14:20.397 "params": { 00:14:20.397 "process_window_size_kb": 1024, 00:14:20.397 "process_max_bandwidth_mb_sec": 0 00:14:20.397 } 
00:14:20.397 }, 00:14:20.397 { 00:14:20.397 "method": "bdev_iscsi_set_options", 00:14:20.397 "params": { 00:14:20.397 "timeout_sec": 30 00:14:20.397 } 00:14:20.397 }, 00:14:20.397 { 00:14:20.397 "method": "bdev_nvme_set_options", 00:14:20.397 "params": { 00:14:20.397 "action_on_timeout": "none", 00:14:20.397 "timeout_us": 0, 00:14:20.397 "timeout_admin_us": 0, 00:14:20.397 "keep_alive_timeout_ms": 10000, 00:14:20.397 "arbitration_burst": 0, 00:14:20.397 "low_priority_weight": 0, 00:14:20.397 "medium_priority_weight": 0, 00:14:20.397 "high_priority_weight": 0, 00:14:20.397 "nvme_adminq_poll_period_us": 10000, 00:14:20.397 "nvme_ioq_poll_period_us": 0, 00:14:20.397 "io_queue_requests": 0, 00:14:20.397 "delay_cmd_submit": true, 00:14:20.397 "transport_retry_count": 4, 00:14:20.397 "bdev_retry_count": 3, 00:14:20.397 "transport_ack_timeout": 0, 00:14:20.397 "ctrlr_loss_timeout_sec": 0, 00:14:20.397 "reconnect_delay_sec": 0, 00:14:20.397 "fast_io_fail_timeout_sec": 0, 00:14:20.397 "disable_auto_failback": false, 00:14:20.397 "generate_uuids": false, 00:14:20.397 "transport_tos": 0, 00:14:20.397 "nvme_error_stat": false, 00:14:20.397 "rdma_srq_size": 0, 00:14:20.397 "io_path_stat": false, 00:14:20.397 "allow_accel_sequence": false, 00:14:20.397 "rdma_max_cq_size": 0, 00:14:20.397 "rdma_cm_event_timeout_ms": 0, 00:14:20.397 "dhchap_digests": [ 00:14:20.397 "sha256", 00:14:20.397 "sha384", 00:14:20.397 "sha512" 00:14:20.397 ], 00:14:20.397 "dhchap_dhgroups": [ 00:14:20.397 "null", 00:14:20.397 "ffdhe2048", 00:14:20.397 "ffdhe3072", 00:14:20.397 "ffdhe4096", 00:14:20.397 "ffdhe6144", 00:14:20.397 "ffdhe8192" 00:14:20.397 ] 00:14:20.397 } 00:14:20.397 }, 00:14:20.397 { 00:14:20.398 "method": "bdev_nvme_set_hotplug", 00:14:20.398 "params": { 00:14:20.398 "period_us": 100000, 00:14:20.398 "enable": false 00:14:20.398 } 00:14:20.398 }, 00:14:20.398 { 00:14:20.398 "method": "bdev_malloc_create", 00:14:20.398 "params": { 00:14:20.398 "name": "malloc0", 00:14:20.398 "num_blocks": 8192, 00:14:20.398 "block_size": 4096, 00:14:20.398 "physical_block_size": 4096, 00:14:20.398 "uuid": "0e42ca46-2c09-467a-8a30-a102a4d00417", 00:14:20.398 "optimal_io_boundary": 0, 00:14:20.398 "md_size": 0, 00:14:20.398 "dif_type": 0, 00:14:20.398 "dif_is_head_of_md": false, 00:14:20.398 "dif_pi_format": 0 00:14:20.398 } 00:14:20.398 }, 00:14:20.398 { 00:14:20.398 "method": "bdev_wait_for_examine" 00:14:20.398 } 00:14:20.398 ] 00:14:20.398 }, 00:14:20.398 { 00:14:20.398 "subsystem": "scsi", 00:14:20.398 "config": null 00:14:20.398 }, 00:14:20.398 { 00:14:20.398 "subsystem": "scheduler", 00:14:20.398 "config": [ 00:14:20.398 { 00:14:20.398 "method": "framework_set_scheduler", 00:14:20.398 "params": { 00:14:20.398 "name": "static" 00:14:20.398 } 00:14:20.398 } 00:14:20.398 ] 00:14:20.398 }, 00:14:20.398 { 00:14:20.398 "subsystem": "vhost_scsi", 00:14:20.398 "config": [] 00:14:20.398 }, 00:14:20.398 { 00:14:20.398 "subsystem": "vhost_blk", 00:14:20.398 "config": [] 00:14:20.398 }, 00:14:20.398 { 00:14:20.398 "subsystem": "ublk", 00:14:20.398 "config": [ 00:14:20.398 { 00:14:20.398 "method": "ublk_create_target", 00:14:20.398 "params": { 00:14:20.398 "cpumask": "1" 00:14:20.398 } 00:14:20.398 }, 00:14:20.398 { 00:14:20.398 "method": "ublk_start_disk", 00:14:20.398 "params": { 00:14:20.398 "bdev_name": "malloc0", 00:14:20.398 "ublk_id": 0, 00:14:20.398 "num_queues": 1, 00:14:20.398 "queue_depth": 128 00:14:20.398 } 00:14:20.398 } 00:14:20.398 ] 00:14:20.398 }, 00:14:20.398 { 00:14:20.398 "subsystem": "nbd", 00:14:20.398 "config": [] 
00:14:20.398 }, 00:14:20.398 { 00:14:20.398 "subsystem": "nvmf", 00:14:20.398 "config": [ 00:14:20.398 { 00:14:20.398 "method": "nvmf_set_config", 00:14:20.398 "params": { 00:14:20.398 "discovery_filter": "match_any", 00:14:20.398 "admin_cmd_passthru": { 00:14:20.398 "identify_ctrlr": false 00:14:20.398 }, 00:14:20.398 "dhchap_digests": [ 00:14:20.398 "sha256", 00:14:20.398 "sha384", 00:14:20.398 "sha512" 00:14:20.398 ], 00:14:20.398 "dhchap_dhgroups": [ 00:14:20.398 "null", 00:14:20.398 "ffdhe2048", 00:14:20.398 "ffdhe3072", 00:14:20.398 "ffdhe4096", 00:14:20.398 "ffdhe6144", 00:14:20.398 "ffdhe8192" 00:14:20.398 ] 00:14:20.398 } 00:14:20.398 }, 00:14:20.398 { 00:14:20.398 "method": "nvmf_set_max_subsystems", 00:14:20.398 "params": { 00:14:20.398 "max_subsystems": 1024 00:14:20.398 } 00:14:20.398 }, 00:14:20.398 { 00:14:20.398 "method": "nvmf_set_crdt", 00:14:20.398 "params": { 00:14:20.398 "crdt1": 0, 00:14:20.398 "crdt2": 0, 00:14:20.398 "crdt3": 0 00:14:20.398 } 00:14:20.398 } 00:14:20.398 ] 00:14:20.398 }, 00:14:20.398 { 00:14:20.398 "subsystem": "iscsi", 00:14:20.398 "config": [ 00:14:20.398 { 00:14:20.398 "method": "iscsi_set_options", 00:14:20.398 "params": { 00:14:20.398 "node_base": "iqn.2016-06.io.spdk", 00:14:20.398 "max_sessions": 128, 00:14:20.398 "max_connections_per_session": 2, 00:14:20.398 "max_queue_depth": 64, 00:14:20.398 "default_time2wait": 2, 00:14:20.398 "default_time2retain": 20, 00:14:20.398 "first_burst_length": 8192, 00:14:20.398 "immediate_data": true, 00:14:20.398 "allow_duplicated_isid": false, 00:14:20.398 "error_recovery_level": 0, 00:14:20.398 "nop_timeout": 60, 00:14:20.398 "nop_in_interval": 30, 00:14:20.398 "disable_chap": false, 00:14:20.398 "require_chap": false, 00:14:20.398 "mutual_chap": false, 00:14:20.398 "chap_group": 0, 00:14:20.398 "max_large_datain_per_connection": 64, 00:14:20.398 "max_r2t_per_connection": 4, 00:14:20.398 "pdu_pool_size": 36864, 00:14:20.398 "immediate_data_pool_size": 16384, 00:14:20.398 "data_out_pool_size": 2048 00:14:20.398 } 00:14:20.398 } 00:14:20.398 ] 00:14:20.398 } 00:14:20.398 ] 00:14:20.398 }' 00:14:20.398 17:48:40 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 83537 00:14:20.398 17:48:40 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 83537 ']' 00:14:20.398 17:48:40 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 83537 00:14:20.398 17:48:40 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:14:20.398 17:48:40 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:20.398 17:48:40 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83537 00:14:20.398 killing process with pid 83537 00:14:20.398 17:48:40 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:20.398 17:48:40 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:20.398 17:48:40 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83537' 00:14:20.398 17:48:40 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 83537 00:14:20.398 17:48:40 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 83537 00:14:20.656 [2024-11-05 17:48:40.488260] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:20.656 [2024-11-05 17:48:40.539104] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 
completed 00:14:20.656 [2024-11-05 17:48:40.539237] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:20.656 [2024-11-05 17:48:40.540310] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:20.656 [2024-11-05 17:48:40.540363] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:20.656 [2024-11-05 17:48:40.540382] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:20.656 [2024-11-05 17:48:40.540402] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:20.656 [2024-11-05 17:48:40.540545] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:21.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.221 17:48:40 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=83575 00:14:21.221 17:48:40 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 83575 00:14:21.221 17:48:40 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 83575 ']' 00:14:21.221 17:48:40 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.221 17:48:40 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:21.221 17:48:40 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.221 17:48:40 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:21.221 17:48:40 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:21.221 17:48:40 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:14:21.221 17:48:40 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:14:21.221 "subsystems": [ 00:14:21.221 { 00:14:21.221 "subsystem": "fsdev", 00:14:21.221 "config": [ 00:14:21.221 { 00:14:21.221 "method": "fsdev_set_opts", 00:14:21.221 "params": { 00:14:21.221 "fsdev_io_pool_size": 65535, 00:14:21.221 "fsdev_io_cache_size": 256 00:14:21.221 } 00:14:21.221 } 00:14:21.221 ] 00:14:21.221 }, 00:14:21.221 { 00:14:21.221 "subsystem": "keyring", 00:14:21.221 "config": [] 00:14:21.221 }, 00:14:21.221 { 00:14:21.221 "subsystem": "iobuf", 00:14:21.221 "config": [ 00:14:21.221 { 00:14:21.221 "method": "iobuf_set_options", 00:14:21.221 "params": { 00:14:21.221 "small_pool_count": 8192, 00:14:21.221 "large_pool_count": 1024, 00:14:21.221 "small_bufsize": 8192, 00:14:21.221 "large_bufsize": 135168, 00:14:21.221 "enable_numa": false 00:14:21.221 } 00:14:21.221 } 00:14:21.221 ] 00:14:21.221 }, 00:14:21.221 { 00:14:21.221 "subsystem": "sock", 00:14:21.221 "config": [ 00:14:21.221 { 00:14:21.221 "method": "sock_set_default_impl", 00:14:21.221 "params": { 00:14:21.221 "impl_name": "posix" 00:14:21.221 } 00:14:21.221 }, 00:14:21.221 { 00:14:21.221 "method": "sock_impl_set_options", 00:14:21.221 "params": { 00:14:21.221 "impl_name": "ssl", 00:14:21.221 "recv_buf_size": 4096, 00:14:21.221 "send_buf_size": 4096, 00:14:21.221 "enable_recv_pipe": true, 00:14:21.221 "enable_quickack": false, 00:14:21.221 "enable_placement_id": 0, 00:14:21.221 "enable_zerocopy_send_server": true, 00:14:21.221 "enable_zerocopy_send_client": false, 00:14:21.221 "zerocopy_threshold": 0, 00:14:21.221 "tls_version": 0, 00:14:21.221 "enable_ktls": false 00:14:21.221 } 00:14:21.221 }, 00:14:21.221 { 00:14:21.221 "method": "sock_impl_set_options", 00:14:21.221 "params": { 
00:14:21.221 "impl_name": "posix", 00:14:21.221 "recv_buf_size": 2097152, 00:14:21.221 "send_buf_size": 2097152, 00:14:21.221 "enable_recv_pipe": true, 00:14:21.221 "enable_quickack": false, 00:14:21.221 "enable_placement_id": 0, 00:14:21.221 "enable_zerocopy_send_server": true, 00:14:21.221 "enable_zerocopy_send_client": false, 00:14:21.221 "zerocopy_threshold": 0, 00:14:21.221 "tls_version": 0, 00:14:21.221 "enable_ktls": false 00:14:21.221 } 00:14:21.221 } 00:14:21.221 ] 00:14:21.221 }, 00:14:21.221 { 00:14:21.221 "subsystem": "vmd", 00:14:21.221 "config": [] 00:14:21.221 }, 00:14:21.221 { 00:14:21.221 "subsystem": "accel", 00:14:21.221 "config": [ 00:14:21.221 { 00:14:21.221 "method": "accel_set_options", 00:14:21.221 "params": { 00:14:21.221 "small_cache_size": 128, 00:14:21.221 "large_cache_size": 16, 00:14:21.221 "task_count": 2048, 00:14:21.221 "sequence_count": 2048, 00:14:21.221 "buf_count": 2048 00:14:21.221 } 00:14:21.221 } 00:14:21.221 ] 00:14:21.221 }, 00:14:21.221 { 00:14:21.221 "subsystem": "bdev", 00:14:21.221 "config": [ 00:14:21.221 { 00:14:21.221 "method": "bdev_set_options", 00:14:21.221 "params": { 00:14:21.221 "bdev_io_pool_size": 65535, 00:14:21.221 "bdev_io_cache_size": 256, 00:14:21.221 "bdev_auto_examine": true, 00:14:21.221 "iobuf_small_cache_size": 128, 00:14:21.221 "iobuf_large_cache_size": 16 00:14:21.221 } 00:14:21.221 }, 00:14:21.221 { 00:14:21.221 "method": "bdev_raid_set_options", 00:14:21.221 "params": { 00:14:21.221 "process_window_size_kb": 1024, 00:14:21.221 "process_max_bandwidth_mb_sec": 0 00:14:21.221 } 00:14:21.221 }, 00:14:21.221 { 00:14:21.221 "method": "bdev_iscsi_set_options", 00:14:21.221 "params": { 00:14:21.221 "timeout_sec": 30 00:14:21.221 } 00:14:21.221 }, 00:14:21.221 { 00:14:21.221 "method": "bdev_nvme_set_options", 00:14:21.221 "params": { 00:14:21.221 "action_on_timeout": "none", 00:14:21.221 "timeout_us": 0, 00:14:21.221 "timeout_admin_us": 0, 00:14:21.221 "keep_alive_timeout_ms": 10000, 00:14:21.221 "arbitration_burst": 0, 00:14:21.221 "low_priority_weight": 0, 00:14:21.221 "medium_priority_weight": 0, 00:14:21.221 "high_priority_weight": 0, 00:14:21.221 "nvme_adminq_poll_period_us": 10000, 00:14:21.221 "nvme_ioq_poll_period_us": 0, 00:14:21.221 "io_queue_requests": 0, 00:14:21.221 "delay_cmd_submit": true, 00:14:21.221 "transport_retry_count": 4, 00:14:21.221 "bdev_retry_count": 3, 00:14:21.221 "transport_ack_timeout": 0, 00:14:21.221 "ctrlr_loss_timeout_sec": 0, 00:14:21.221 "reconnect_delay_sec": 0, 00:14:21.221 "fast_io_fail_timeout_sec": 0, 00:14:21.221 "disable_auto_failback": false, 00:14:21.221 "generate_uuids": false, 00:14:21.221 "transport_tos": 0, 00:14:21.221 "nvme_error_stat": false, 00:14:21.221 "rdma_srq_size": 0, 00:14:21.221 "io_path_stat": false, 00:14:21.221 "allow_accel_sequence": false, 00:14:21.221 "rdma_max_cq_size": 0, 00:14:21.221 "rdma_cm_event_timeout_ms": 0, 00:14:21.221 "dhchap_digests": [ 00:14:21.221 "sha256", 00:14:21.221 "sha384", 00:14:21.221 "sha512" 00:14:21.221 ], 00:14:21.221 "dhchap_dhgroups": [ 00:14:21.221 "null", 00:14:21.221 "ffdhe2048", 00:14:21.221 "ffdhe3072", 00:14:21.221 "ffdhe4096", 00:14:21.221 "ffdhe6144", 00:14:21.221 "ffdhe8192" 00:14:21.221 ] 00:14:21.221 } 00:14:21.221 }, 00:14:21.221 { 00:14:21.221 "method": "bdev_nvme_set_hotplug", 00:14:21.221 "params": { 00:14:21.221 "period_us": 100000, 00:14:21.221 "enable": false 00:14:21.221 } 00:14:21.221 }, 00:14:21.221 { 00:14:21.221 "method": "bdev_malloc_create", 00:14:21.221 "params": { 00:14:21.221 "name": "malloc0", 
00:14:21.221 "num_blocks": 8192, 00:14:21.221 "block_size": 4096, 00:14:21.221 "physical_block_size": 4096, 00:14:21.221 "uuid": "0e42ca46-2c09-467a-8a30-a102a4d00417", 00:14:21.221 "optimal_io_boundary": 0, 00:14:21.221 "md_size": 0, 00:14:21.221 "dif_type": 0, 00:14:21.221 "dif_is_head_of_md": false, 00:14:21.221 "dif_pi_format": 0 00:14:21.221 } 00:14:21.221 }, 00:14:21.221 { 00:14:21.221 "method": "bdev_wait_for_examine" 00:14:21.221 } 00:14:21.221 ] 00:14:21.221 }, 00:14:21.221 { 00:14:21.221 "subsystem": "scsi", 00:14:21.221 "config": null 00:14:21.221 }, 00:14:21.221 { 00:14:21.221 "subsystem": "scheduler", 00:14:21.221 "config": [ 00:14:21.221 { 00:14:21.221 "method": "framework_set_scheduler", 00:14:21.221 "params": { 00:14:21.221 "name": "static" 00:14:21.221 } 00:14:21.221 } 00:14:21.221 ] 00:14:21.221 }, 00:14:21.221 { 00:14:21.221 "subsystem": "vhost_scsi", 00:14:21.221 "config": [] 00:14:21.221 }, 00:14:21.221 { 00:14:21.221 "subsystem": "vhost_blk", 00:14:21.221 "config": [] 00:14:21.221 }, 00:14:21.221 { 00:14:21.221 "subsystem": "ublk", 00:14:21.221 "config": [ 00:14:21.221 { 00:14:21.221 "method": "ublk_create_target", 00:14:21.221 "params": { 00:14:21.221 "cpumask": "1" 00:14:21.221 } 00:14:21.221 }, 00:14:21.221 { 00:14:21.221 "method": "ublk_start_disk", 00:14:21.221 "params": { 00:14:21.221 "bdev_name": "malloc0", 00:14:21.221 "ublk_id": 0, 00:14:21.221 "num_queues": 1, 00:14:21.221 "queue_depth": 128 00:14:21.221 } 00:14:21.221 } 00:14:21.221 ] 00:14:21.221 }, 00:14:21.221 { 00:14:21.221 "subsystem": "nbd", 00:14:21.221 "config": [] 00:14:21.221 }, 00:14:21.221 { 00:14:21.221 "subsystem": "nvmf", 00:14:21.221 "config": [ 00:14:21.221 { 00:14:21.221 "method": "nvmf_set_config", 00:14:21.221 "params": { 00:14:21.221 "discovery_filter": "match_any", 00:14:21.221 "admin_cmd_passthru": { 00:14:21.221 "identify_ctrlr": false 00:14:21.221 }, 00:14:21.221 "dhchap_digests": [ 00:14:21.221 "sha256", 00:14:21.221 "sha384", 00:14:21.221 "sha512" 00:14:21.221 ], 00:14:21.221 "dhchap_dhgroups": [ 00:14:21.221 "null", 00:14:21.221 "ffdhe2048", 00:14:21.221 "ffdhe3072", 00:14:21.221 "ffdhe4096", 00:14:21.221 "ffdhe6144", 00:14:21.221 "ffdhe8192" 00:14:21.221 ] 00:14:21.221 } 00:14:21.221 }, 00:14:21.221 { 00:14:21.221 "method": "nvmf_set_max_subsystems", 00:14:21.221 "params": { 00:14:21.221 "max_subsystems": 1024 00:14:21.221 } 00:14:21.221 }, 00:14:21.221 { 00:14:21.221 "method": "nvmf_set_crdt", 00:14:21.221 "params": { 00:14:21.221 "crdt1": 0, 00:14:21.221 "crdt2": 0, 00:14:21.221 "crdt3": 0 00:14:21.221 } 00:14:21.221 } 00:14:21.221 ] 00:14:21.221 }, 00:14:21.221 { 00:14:21.221 "subsystem": "iscsi", 00:14:21.221 "config": [ 00:14:21.221 { 00:14:21.221 "method": "iscsi_set_options", 00:14:21.221 "params": { 00:14:21.221 "node_base": "iqn.2016-06.io.spdk", 00:14:21.221 "max_sessions": 128, 00:14:21.221 "max_connections_per_session": 2, 00:14:21.221 "max_queue_depth": 64, 00:14:21.221 "default_time2wait": 2, 00:14:21.221 "default_time2retain": 20, 00:14:21.221 "first_burst_length": 8192, 00:14:21.221 "immediate_data": true, 00:14:21.221 "allow_duplicated_isid": false, 00:14:21.221 "error_recovery_level": 0, 00:14:21.221 "nop_timeout": 60, 00:14:21.221 "nop_in_interval": 30, 00:14:21.221 "disable_chap": false, 00:14:21.221 "require_chap": false, 00:14:21.221 "mutual_chap": false, 00:14:21.221 "chap_group": 0, 00:14:21.221 "max_large_datain_per_connection": 64, 00:14:21.221 "max_r2t_per_connection": 4, 00:14:21.221 "pdu_pool_size": 36864, 00:14:21.221 "immediate_data_pool_size": 
16384, 00:14:21.222 "data_out_pool_size": 2048 00:14:21.222 } 00:14:21.222 } 00:14:21.222 ] 00:14:21.222 } 00:14:21.222 ] 00:14:21.222 }' 00:14:21.222 [2024-11-05 17:48:40.985006] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:14:21.222 [2024-11-05 17:48:40.985120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83575 ] 00:14:21.222 [2024-11-05 17:48:41.114631] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:21.222 [2024-11-05 17:48:41.142890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.222 [2024-11-05 17:48:41.167210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.787 [2024-11-05 17:48:41.528086] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:21.787 [2024-11-05 17:48:41.528378] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:21.787 [2024-11-05 17:48:41.536210] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:21.787 [2024-11-05 17:48:41.536290] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:21.787 [2024-11-05 17:48:41.536303] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:21.787 [2024-11-05 17:48:41.536311] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:21.787 [2024-11-05 17:48:41.545166] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:21.787 [2024-11-05 17:48:41.545192] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:21.787 [2024-11-05 17:48:41.551107] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:21.787 [2024-11-05 17:48:41.551203] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:21.787 [2024-11-05 17:48:41.568087] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:22.046 17:48:41 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:22.046 17:48:41 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:14:22.047 17:48:41 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:14:22.047 17:48:41 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:14:22.047 17:48:41 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.047 17:48:41 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:22.047 17:48:41 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.047 17:48:41 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:22.047 17:48:41 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:14:22.047 17:48:41 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 83575 00:14:22.047 17:48:41 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 83575 ']' 00:14:22.047 17:48:41 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 83575 00:14:22.047 17:48:41 
ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:14:22.047 17:48:41 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:22.047 17:48:41 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83575 00:14:22.047 killing process with pid 83575 00:14:22.047 17:48:41 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:22.047 17:48:41 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:22.047 17:48:41 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83575' 00:14:22.047 17:48:41 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 83575 00:14:22.047 17:48:41 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 83575 00:14:22.305 [2024-11-05 17:48:42.101299] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:22.305 [2024-11-05 17:48:42.134177] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:22.305 [2024-11-05 17:48:42.134306] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:22.305 [2024-11-05 17:48:42.140096] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:22.305 [2024-11-05 17:48:42.140151] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:22.305 [2024-11-05 17:48:42.140160] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:22.305 [2024-11-05 17:48:42.140191] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:22.305 [2024-11-05 17:48:42.140336] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:22.564 17:48:42 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:14:22.564 00:14:22.564 real 0m3.538s 00:14:22.564 user 0m2.414s 00:14:22.564 sys 0m1.656s 00:14:22.564 17:48:42 ublk.test_save_ublk_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:22.564 ************************************ 00:14:22.564 END TEST test_save_ublk_config 00:14:22.564 ************************************ 00:14:22.564 17:48:42 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:22.822 17:48:42 ublk -- ublk/ublk.sh@139 -- # spdk_pid=83637 00:14:22.822 17:48:42 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:22.822 17:48:42 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:22.822 17:48:42 ublk -- ublk/ublk.sh@141 -- # waitforlisten 83637 00:14:22.822 17:48:42 ublk -- common/autotest_common.sh@833 -- # '[' -z 83637 ']' 00:14:22.822 17:48:42 ublk -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.822 17:48:42 ublk -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:22.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.822 17:48:42 ublk -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.822 17:48:42 ublk -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:22.822 17:48:42 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:22.822 [2024-11-05 17:48:42.636787] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 
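
For reference: test_save_ublk_config feeds the JSON dump shown above back into a fresh spdk_tgt through process substitution (-c /dev/fd/63). A rough equivalent using a temporary file instead (the /tmp path is illustrative, not taken from this run):

    # a minimal sketch of the save/replay round trip this test exercises
    ./scripts/rpc.py save_config > /tmp/ublk_config.json   # capture the running target's subsystem config
    ./build/bin/spdk_tgt -L ublk -c /tmp/ublk_config.json  # a new target replays it, re-creating ublk0
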
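The waitforlisten step that follows amounts to polling the RPC socket until the new target answers. Approximately (a sketch, assuming the default socket path logged above; the real helper in autotest_common.sh also watches the pid and the max_retries budget seen in the trace):

    # wait until spdk_tgt (pid 83637 here) services RPCs
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
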
00:14:22.822 [2024-11-05 17:48:42.636911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83637 ] 00:14:22.822 [2024-11-05 17:48:42.766650] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:22.822 [2024-11-05 17:48:42.792141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:23.080 [2024-11-05 17:48:42.817804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.080 [2024-11-05 17:48:42.817836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.647 17:48:43 ublk -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:23.647 17:48:43 ublk -- common/autotest_common.sh@866 -- # return 0 00:14:23.647 17:48:43 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:14:23.647 17:48:43 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:23.647 17:48:43 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:23.647 17:48:43 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:23.647 ************************************ 00:14:23.647 START TEST test_create_ublk 00:14:23.647 ************************************ 00:14:23.647 17:48:43 ublk.test_create_ublk -- common/autotest_common.sh@1127 -- # test_create_ublk 00:14:23.647 17:48:43 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:14:23.647 17:48:43 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.647 17:48:43 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:23.647 [2024-11-05 17:48:43.487090] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:23.647 [2024-11-05 17:48:43.488433] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:23.647 17:48:43 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.647 17:48:43 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:14:23.647 17:48:43 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:14:23.647 17:48:43 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.647 17:48:43 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:23.647 17:48:43 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.647 17:48:43 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:14:23.647 17:48:43 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:23.647 17:48:43 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.647 17:48:43 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:23.647 [2024-11-05 17:48:43.559223] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:23.647 [2024-11-05 17:48:43.559614] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:23.647 [2024-11-05 17:48:43.559630] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:23.647 [2024-11-05 17:48:43.559638] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:23.647 [2024-11-05 17:48:43.568334] ublk.c: 349:ublk_ctrl_process_cqe: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:23.647 [2024-11-05 17:48:43.568354] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:23.647 [2024-11-05 17:48:43.575092] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:23.647 [2024-11-05 17:48:43.575732] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:23.647 [2024-11-05 17:48:43.608098] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:23.647 17:48:43 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.647 17:48:43 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:14:23.647 17:48:43 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:14:23.647 17:48:43 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:14:23.647 17:48:43 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.647 17:48:43 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:23.647 17:48:43 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.647 17:48:43 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:14:23.647 { 00:14:23.647 "ublk_device": "/dev/ublkb0", 00:14:23.647 "id": 0, 00:14:23.647 "queue_depth": 512, 00:14:23.647 "num_queues": 4, 00:14:23.647 "bdev_name": "Malloc0" 00:14:23.647 } 00:14:23.647 ]' 00:14:23.647 17:48:43 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:14:23.906 17:48:43 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:23.906 17:48:43 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:14:23.906 17:48:43 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:14:23.906 17:48:43 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:14:23.906 17:48:43 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:14:23.906 17:48:43 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:14:23.906 17:48:43 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:14:23.906 17:48:43 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:14:23.906 17:48:43 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:23.906 17:48:43 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:14:23.906 17:48:43 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:14:23.906 17:48:43 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:14:23.906 17:48:43 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:14:23.906 17:48:43 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:14:23.906 17:48:43 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:23.906 17:48:43 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:14:23.906 17:48:43 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:23.906 17:48:43 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:23.906 17:48:43 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:23.906 17:48:43 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio 
--name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:23.906 17:48:43 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:23.906 fio: verification read phase will never start because write phase uses all of runtime 00:14:23.906 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:23.906 fio-3.35 00:14:23.906 Starting 1 process 00:14:36.106 00:14:36.106 fio_test: (groupid=0, jobs=1): err= 0: pid=83677: Tue Nov 5 17:48:54 2024 00:14:36.106 write: IOPS=16.8k, BW=65.7MiB/s (68.9MB/s)(657MiB/10001msec); 0 zone resets 00:14:36.106 clat (usec): min=35, max=4398, avg=58.57, stdev=129.72 00:14:36.106 lat (usec): min=35, max=4399, avg=59.06, stdev=129.76 00:14:36.107 clat percentiles (usec): 00:14:36.107 | 1.00th=[ 39], 5.00th=[ 41], 10.00th=[ 42], 20.00th=[ 45], 00:14:36.107 | 30.00th=[ 47], 40.00th=[ 49], 50.00th=[ 51], 60.00th=[ 53], 00:14:36.107 | 70.00th=[ 55], 80.00th=[ 58], 90.00th=[ 65], 95.00th=[ 74], 00:14:36.107 | 99.00th=[ 113], 99.50th=[ 165], 99.90th=[ 2802], 99.95th=[ 3752], 00:14:36.107 | 99.99th=[ 4047] 00:14:36.107 bw ( KiB/s): min=25253, max=81288, per=99.24%, avg=66791.00, stdev=15061.07, samples=19 00:14:36.107 iops : min= 6313, max=20322, avg=16697.84, stdev=3765.26, samples=19 00:14:36.107 lat (usec) : 50=46.81%, 100=51.75%, 250=1.14%, 500=0.07%, 750=0.02% 00:14:36.107 lat (usec) : 1000=0.02% 00:14:36.107 lat (msec) : 2=0.06%, 4=0.13%, 10=0.01% 00:14:36.107 cpu : usr=2.81%, sys=13.51%, ctx=168374, majf=0, minf=797 00:14:36.107 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:36.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:36.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:36.107 issued rwts: total=0,168276,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:36.107 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:36.107 00:14:36.107 Run status group 0 (all jobs): 00:14:36.107 WRITE: bw=65.7MiB/s (68.9MB/s), 65.7MiB/s-65.7MiB/s (68.9MB/s-68.9MB/s), io=657MiB (689MB), run=10001-10001msec 00:14:36.107 00:14:36.107 Disk stats (read/write): 00:14:36.107 ublkb0: ios=0/166249, merge=0/0, ticks=0/7921, in_queue=7922, util=98.84% 00:14:36.107 17:48:54 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:36.107 17:48:54 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.107 17:48:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:36.107 [2024-11-05 17:48:54.017396] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:36.107 [2024-11-05 17:48:54.061122] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:36.107 [2024-11-05 17:48:54.061702] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:36.107 [2024-11-05 17:48:54.069102] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:36.107 [2024-11-05 17:48:54.069339] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:36.107 [2024-11-05 17:48:54.069353] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 
stopped 00:14:36.107 17:48:54 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.107 17:48:54 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:14:36.107 17:48:54 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:14:36.107 17:48:54 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:36.107 17:48:54 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:36.107 17:48:54 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:36.107 17:48:54 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:36.107 17:48:54 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:36.107 17:48:54 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:14:36.107 17:48:54 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.107 17:48:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:36.107 [2024-11-05 17:48:54.085170] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:36.107 request: 00:14:36.107 { 00:14:36.107 "ublk_id": 0, 00:14:36.107 "method": "ublk_stop_disk", 00:14:36.107 "req_id": 1 00:14:36.107 } 00:14:36.107 Got JSON-RPC error response 00:14:36.107 response: 00:14:36.107 { 00:14:36.107 "code": -19, 00:14:36.107 "message": "No such device" 00:14:36.107 } 00:14:36.107 17:48:54 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:36.107 17:48:54 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:14:36.107 17:48:54 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:36.107 17:48:54 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:36.107 17:48:54 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:36.107 17:48:54 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:36.107 17:48:54 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.107 17:48:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:36.107 [2024-11-05 17:48:54.101134] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:36.107 [2024-11-05 17:48:54.102552] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:36.107 [2024-11-05 17:48:54.102585] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:36.107 17:48:54 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.107 17:48:54 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:36.107 17:48:54 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.107 17:48:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:36.107 17:48:54 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.107 17:48:54 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:36.107 17:48:54 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:36.107 17:48:54 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.107 17:48:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:36.107 17:48:54 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.107 17:48:54 
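
Condensing test_create_ublk to the RPCs visible in the trace (rpc_cmd is the harness wrapper around ./scripts/rpc.py), the flow is:

    ./scripts/rpc.py ublk_create_target
    ./scripts/rpc.py bdev_malloc_create 128 4096             # creates Malloc0: 128 MiB of 4096-byte blocks
    ./scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512   # 4 queues, depth 512 -> /dev/ublkb0
    ./scripts/rpc.py ublk_stop_disk 0                        # STOP_DEV + DEL_DEV, as logged above
    ./scripts/rpc.py ublk_stop_disk 0                        # repeat fails: code -19, "No such device"
    ./scripts/rpc.py ublk_destroy_target

In between, the fio job shown above writes a 0xcc pattern to /dev/ublkb0 for 10 seconds; with --time_based consuming the whole runtime, fio warns that the verification read phase never starts.
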
ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:36.107 17:48:54 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:36.107 17:48:54 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:36.107 17:48:54 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:36.107 17:48:54 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.107 17:48:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:36.107 17:48:54 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.107 17:48:54 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:36.107 17:48:54 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:36.107 ************************************ 00:14:36.107 END TEST test_create_ublk 00:14:36.107 ************************************ 00:14:36.107 17:48:54 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:36.107 00:14:36.107 real 0m10.809s 00:14:36.107 user 0m0.572s 00:14:36.107 sys 0m1.432s 00:14:36.107 17:48:54 ublk.test_create_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:36.107 17:48:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:36.107 17:48:54 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:36.107 17:48:54 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:36.107 17:48:54 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:36.107 17:48:54 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:36.107 ************************************ 00:14:36.107 START TEST test_create_multi_ublk 00:14:36.107 ************************************ 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@1127 -- # test_create_multi_ublk 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:36.107 [2024-11-05 17:48:54.337081] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:36.107 [2024-11-05 17:48:54.338236] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.107 17:48:54 
ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:36.107 [2024-11-05 17:48:54.421237] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:36.107 [2024-11-05 17:48:54.421547] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:36.107 [2024-11-05 17:48:54.421559] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:36.107 [2024-11-05 17:48:54.421567] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:36.107 [2024-11-05 17:48:54.441088] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:36.107 [2024-11-05 17:48:54.441112] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:36.107 [2024-11-05 17:48:54.453085] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:36.107 [2024-11-05 17:48:54.453584] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:36.107 [2024-11-05 17:48:54.465342] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:36.107 [2024-11-05 17:48:54.571173] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:36.107 [2024-11-05 17:48:54.571486] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:36.107 [2024-11-05 17:48:54.571499] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:36.107 [2024-11-05 17:48:54.571504] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:36.107 [2024-11-05 17:48:54.583097] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:36.107 [2024-11-05 17:48:54.583115] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:36.107 [2024-11-05 17:48:54.595090] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:36.107 [2024-11-05 17:48:54.595603] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:36.107 [2024-11-05 17:48:54.620088] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.107 17:48:54 
ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:36.107 [2024-11-05 17:48:54.727180] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:36.107 [2024-11-05 17:48:54.727490] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:36.107 [2024-11-05 17:48:54.727502] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:36.107 [2024-11-05 17:48:54.727508] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:36.107 [2024-11-05 17:48:54.739097] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:36.107 [2024-11-05 17:48:54.739118] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:36.107 [2024-11-05 17:48:54.751091] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:36.107 [2024-11-05 17:48:54.751608] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:36.107 [2024-11-05 17:48:54.776088] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.107 17:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:36.108 17:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:36.108 17:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.108 17:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:36.108 [2024-11-05 17:48:54.883185] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:36.108 [2024-11-05 17:48:54.883493] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:36.108 [2024-11-05 17:48:54.883506] ublk.c: 
971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:36.108 [2024-11-05 17:48:54.883511] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:36.108 [2024-11-05 17:48:54.895105] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:36.108 [2024-11-05 17:48:54.895123] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:36.108 [2024-11-05 17:48:54.907092] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:36.108 [2024-11-05 17:48:54.907599] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:36.108 [2024-11-05 17:48:54.914133] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:36.108 17:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.108 17:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:36.108 17:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:36.108 17:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.108 17:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:36.108 17:48:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.108 17:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:36.108 { 00:14:36.108 "ublk_device": "/dev/ublkb0", 00:14:36.108 "id": 0, 00:14:36.108 "queue_depth": 512, 00:14:36.108 "num_queues": 4, 00:14:36.108 "bdev_name": "Malloc0" 00:14:36.108 }, 00:14:36.108 { 00:14:36.108 "ublk_device": "/dev/ublkb1", 00:14:36.108 "id": 1, 00:14:36.108 "queue_depth": 512, 00:14:36.108 "num_queues": 4, 00:14:36.108 "bdev_name": "Malloc1" 00:14:36.108 }, 00:14:36.108 { 00:14:36.108 "ublk_device": "/dev/ublkb2", 00:14:36.108 "id": 2, 00:14:36.108 "queue_depth": 512, 00:14:36.108 "num_queues": 4, 00:14:36.108 "bdev_name": "Malloc2" 00:14:36.108 }, 00:14:36.108 { 00:14:36.108 "ublk_device": "/dev/ublkb3", 00:14:36.108 "id": 3, 00:14:36.108 "queue_depth": 512, 00:14:36.108 "num_queues": 4, 00:14:36.108 "bdev_name": "Malloc3" 00:14:36.108 } 00:14:36.108 ]' 00:14:36.108 17:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:36.108 17:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:36.108 17:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:36.108 17:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:36.108 17:48:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:36.108 17:48:55 
ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- 
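
Each disk is then checked against ublk_get_disks output with per-index jq probes; for example (same filters as in the trace):

    ./scripts/rpc.py ublk_get_disks | jq -r '.[1].ublk_device'   # expected: /dev/ublkb1
    ./scripts/rpc.py ublk_get_disks | jq -r '.[1].queue_depth'   # expected: 512
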
common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:36.108 [2024-11-05 17:48:55.587155] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:36.108 [2024-11-05 17:48:55.628551] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:36.108 [2024-11-05 17:48:55.629489] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:36.108 [2024-11-05 17:48:55.639086] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:36.108 [2024-11-05 17:48:55.639310] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:36.108 [2024-11-05 17:48:55.639320] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:36.108 [2024-11-05 17:48:55.655157] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:36.108 [2024-11-05 17:48:55.691623] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:36.108 [2024-11-05 17:48:55.692437] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:36.108 [2024-11-05 17:48:55.698092] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:36.108 [2024-11-05 17:48:55.698312] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:36.108 [2024-11-05 17:48:55.698322] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:36.108 [2024-11-05 17:48:55.712156] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:36.108 [2024-11-05 17:48:55.743627] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:36.108 [2024-11-05 17:48:55.744396] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:36.108 [2024-11-05 17:48:55.750091] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:36.108 [2024-11-05 17:48:55.750312] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:36.108 [2024-11-05 17:48:55.750319] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # 
rpc_cmd ublk_stop_disk 3 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:36.108 [2024-11-05 17:48:55.763149] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:36.108 [2024-11-05 17:48:55.818115] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:36.108 [2024-11-05 17:48:55.818673] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:36.108 [2024-11-05 17:48:55.826093] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:36.108 [2024-11-05 17:48:55.826300] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:36.108 [2024-11-05 17:48:55.826306] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.108 17:48:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:36.108 [2024-11-05 17:48:56.026176] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:36.109 [2024-11-05 17:48:56.027150] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:36.109 [2024-11-05 17:48:56.027181] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:36.109 17:48:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:36.109 17:48:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:36.109 17:48:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:36.109 17:48:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.109 17:48:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:36.367 17:48:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.367 17:48:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:36.367 17:48:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:36.367 17:48:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.367 17:48:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:36.367 17:48:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.367 17:48:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:36.367 17:48:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:36.367 17:48:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.367 17:48:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:36.367 17:48:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.367 17:48:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:36.367 17:48:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:36.367 17:48:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.367 17:48:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:36.367 17:48:56 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.367 17:48:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:36.367 17:48:56 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:36.367 17:48:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.367 17:48:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:36.625 17:48:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.625 17:48:56 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:36.625 17:48:56 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:36.625 17:48:56 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:36.625 17:48:56 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:36.625 17:48:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.625 17:48:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:36.625 17:48:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.625 17:48:56 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:36.625 17:48:56 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:36.625 17:48:56 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:36.625 00:14:36.625 real 0m2.131s 00:14:36.625 user 0m0.805s 00:14:36.625 sys 0m0.150s 00:14:36.625 17:48:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:36.625 17:48:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:36.625 ************************************ 00:14:36.625 END TEST test_create_multi_ublk 00:14:36.625 ************************************ 00:14:36.625 17:48:56 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:36.625 17:48:56 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:36.625 17:48:56 ublk -- ublk/ublk.sh@130 -- # killprocess 83637 00:14:36.625 17:48:56 ublk -- common/autotest_common.sh@952 -- # '[' -z 83637 ']' 00:14:36.626 17:48:56 ublk -- common/autotest_common.sh@956 -- # kill -0 83637 00:14:36.626 17:48:56 ublk -- common/autotest_common.sh@957 -- # uname 00:14:36.626 17:48:56 ublk -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:36.626 17:48:56 ublk -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83637 00:14:36.626 17:48:56 ublk -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:36.626 17:48:56 ublk -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:36.626 killing process with pid 83637 00:14:36.626 17:48:56 ublk -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83637' 00:14:36.626 17:48:56 ublk -- common/autotest_common.sh@971 -- # kill 83637 00:14:36.626 17:48:56 ublk -- common/autotest_common.sh@976 -- # wait 83637 00:14:36.884 [2024-11-05 17:48:56.739598] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:36.884 [2024-11-05 17:48:56.739658] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:37.142 00:14:37.142 real 0m18.151s 00:14:37.142 user 0m28.109s 00:14:37.142 sys 0m7.826s 00:14:37.142 17:48:56 ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:37.142 ************************************ 00:14:37.142 END TEST ublk 00:14:37.142 ************************************ 00:14:37.142 17:48:56 ublk -- 
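
In summary, test_create_multi_ublk repeats the single-disk pattern for IDs 0 through 3 and then tears everything down; the traced rpc_cmd calls amount to (a sketch):

    for i in 0 1 2 3; do
        ./scripts/rpc.py bdev_malloc_create -b Malloc$i 128 4096
        ./scripts/rpc.py ublk_start_disk Malloc$i $i -q 4 -d 512   # -> /dev/ublkb$i
    done
    # ... per-disk jq checks and ublk_stop_disk for each ID ...
    ./scripts/rpc.py -t 120 ublk_destroy_target                    # 120 s timeout, as invoked above
    for i in 0 1 2 3; do ./scripts/rpc.py bdev_malloc_delete Malloc$i; done
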
common/autotest_common.sh@10 -- # set +x 00:14:37.142 17:48:56 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:37.142 17:48:56 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:37.142 17:48:56 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:37.143 17:48:56 -- common/autotest_common.sh@10 -- # set +x 00:14:37.143 ************************************ 00:14:37.143 START TEST ublk_recovery 00:14:37.143 ************************************ 00:14:37.143 17:48:57 ublk_recovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:37.143 * Looking for test storage... 00:14:37.143 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:37.143 17:48:57 ublk_recovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:37.143 17:48:57 ublk_recovery -- common/autotest_common.sh@1691 -- # lcov --version 00:14:37.143 17:48:57 ublk_recovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:37.143 17:48:57 ublk_recovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:37.143 17:48:57 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:37.143 17:48:57 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:37.143 17:48:57 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:37.143 17:48:57 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:37.143 17:48:57 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:37.143 17:48:57 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:37.143 17:48:57 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:37.143 17:48:57 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:37.143 17:48:57 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:37.143 17:48:57 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:37.143 17:48:57 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:37.143 17:48:57 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:14:37.143 17:48:57 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:14:37.143 17:48:57 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:37.143 17:48:57 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:37.143 17:48:57 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:14:37.143 17:48:57 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:14:37.143 17:48:57 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:37.143 17:48:57 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:14:37.143 17:48:57 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:37.143 17:48:57 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:14:37.143 17:48:57 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:14:37.143 17:48:57 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:37.143 17:48:57 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:14:37.401 17:48:57 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:37.402 17:48:57 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:37.402 17:48:57 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:37.402 17:48:57 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:14:37.402 17:48:57 ublk_recovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:37.402 17:48:57 ublk_recovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:37.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.402 --rc genhtml_branch_coverage=1 00:14:37.402 --rc genhtml_function_coverage=1 00:14:37.402 --rc genhtml_legend=1 00:14:37.402 --rc geninfo_all_blocks=1 00:14:37.402 --rc geninfo_unexecuted_blocks=1 00:14:37.402 00:14:37.402 ' 00:14:37.402 17:48:57 ublk_recovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:37.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.402 --rc genhtml_branch_coverage=1 00:14:37.402 --rc genhtml_function_coverage=1 00:14:37.402 --rc genhtml_legend=1 00:14:37.402 --rc geninfo_all_blocks=1 00:14:37.402 --rc geninfo_unexecuted_blocks=1 00:14:37.402 00:14:37.402 ' 00:14:37.402 17:48:57 ublk_recovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:37.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.402 --rc genhtml_branch_coverage=1 00:14:37.402 --rc genhtml_function_coverage=1 00:14:37.402 --rc genhtml_legend=1 00:14:37.402 --rc geninfo_all_blocks=1 00:14:37.402 --rc geninfo_unexecuted_blocks=1 00:14:37.402 00:14:37.402 ' 00:14:37.402 17:48:57 ublk_recovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:37.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.402 --rc genhtml_branch_coverage=1 00:14:37.402 --rc genhtml_function_coverage=1 00:14:37.402 --rc genhtml_legend=1 00:14:37.402 --rc geninfo_all_blocks=1 00:14:37.402 --rc geninfo_unexecuted_blocks=1 00:14:37.402 00:14:37.402 ' 00:14:37.402 17:48:57 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:37.402 17:48:57 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:37.402 17:48:57 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:37.402 17:48:57 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:37.402 17:48:57 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:37.402 17:48:57 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:37.402 17:48:57 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:37.402 17:48:57 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:37.402 17:48:57 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:14:37.402 17:48:57 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:37.402 17:48:57 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=84014 00:14:37.402 17:48:57 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:37.402 17:48:57 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 84014 00:14:37.402 17:48:57 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 84014 ']' 00:14:37.402 17:48:57 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.402 17:48:57 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:37.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.402 17:48:57 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.402 17:48:57 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:37.402 17:48:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.402 17:48:57 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:37.402 [2024-11-05 17:48:57.217353] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:14:37.402 [2024-11-05 17:48:57.217486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84014 ] 00:14:37.402 [2024-11-05 17:48:57.349358] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
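The trace above launches the recovery target (spdk_pid=84014) and blocks in waitforlisten until the RPC socket answers. A minimal sketch of that launch-and-wait pattern, assuming the default /var/tmp/spdk.sock socket and a simplified polling loop standing in for the real waitforlisten() helper:

    # Sketch only: the real helper lives in common/autotest_common.sh.
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &
    spdk_pid=$!
    trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
    # Poll until the target serves RPCs, bailing out if it died early.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$spdk_pid" 2>/dev/null || exit 1
        sleep 0.1
    done
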
00:14:37.402 [2024-11-05 17:48:57.373522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:37.660 [2024-11-05 17:48:57.397448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.660 [2024-11-05 17:48:57.397473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.226 17:48:58 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:38.226 17:48:58 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:14:38.226 17:48:58 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:38.226 17:48:58 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.226 17:48:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:38.226 [2024-11-05 17:48:58.044086] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:38.226 [2024-11-05 17:48:58.045344] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:38.226 17:48:58 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.226 17:48:58 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:38.226 17:48:58 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.226 17:48:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:38.226 malloc0 00:14:38.226 17:48:58 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.226 17:48:58 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:38.226 17:48:58 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.226 17:48:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:38.226 [2024-11-05 17:48:58.084208] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:14:38.226 [2024-11-05 17:48:58.084296] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:38.226 [2024-11-05 17:48:58.084310] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:38.226 [2024-11-05 17:48:58.084316] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:38.226 [2024-11-05 17:48:58.093187] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:38.226 [2024-11-05 17:48:58.093208] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:38.226 [2024-11-05 17:48:58.100090] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:38.226 [2024-11-05 17:48:58.100212] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:38.226 [2024-11-05 17:48:58.110094] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:38.226 1 00:14:38.226 17:48:58 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.226 17:48:58 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:39.159 17:48:59 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=84042 00:14:39.160 17:48:59 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:39.160 17:48:59 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:39.417 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:39.417 
fio-3.35 00:14:39.417 Starting 1 process 00:14:44.727 17:49:04 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 84014 00:14:44.727 17:49:04 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:50.018 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 84014 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:50.018 17:49:09 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=84157 00:14:50.018 17:49:09 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:50.018 17:49:09 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 84157 00:14:50.018 17:49:09 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 84157 ']' 00:14:50.018 17:49:09 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.018 17:49:09 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:50.018 17:49:09 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.018 17:49:09 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:50.018 17:49:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:50.018 17:49:09 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:50.018 [2024-11-05 17:49:09.205363] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:14:50.018 [2024-11-05 17:49:09.205489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84157 ] 00:14:50.018 [2024-11-05 17:49:09.337051] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
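Above, the first target is killed with SIGKILL mid-I/O (kill -9 84014) while fio keeps the kernel-side /dev/ublkb1 open; a second target (spdk_pid=84157) then starts and, in the lines that follow, re-adopts the device. Expressed as standalone commands, that recovery sequence is roughly:

    # RPC names match the rpc_cmd lines traced below; socket path assumed default.
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    # Re-attach the still-live kernel device ublk1 to the fresh bdev:
    scripts/rpc.py ublk_recover_disk malloc0 1
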
00:14:50.018 [2024-11-05 17:49:09.361651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:50.018 [2024-11-05 17:49:09.385396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.018 [2024-11-05 17:49:09.385479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.276 17:49:10 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:50.276 17:49:10 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:14:50.276 17:49:10 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:50.276 17:49:10 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.276 17:49:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:50.276 [2024-11-05 17:49:10.045096] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:50.276 [2024-11-05 17:49:10.046367] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:50.276 17:49:10 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.276 17:49:10 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:50.276 17:49:10 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.276 17:49:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:50.276 malloc0 00:14:50.276 17:49:10 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.276 17:49:10 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:50.276 17:49:10 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.276 17:49:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:50.276 [2024-11-05 17:49:10.085211] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:50.276 [2024-11-05 17:49:10.085249] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:50.276 [2024-11-05 17:49:10.085264] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:50.276 [2024-11-05 17:49:10.093115] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:50.276 [2024-11-05 17:49:10.093139] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:14:50.276 [2024-11-05 17:49:10.093146] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:50.276 [2024-11-05 17:49:10.093212] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:50.276 1 00:14:50.276 17:49:10 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.276 17:49:10 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 84042 00:14:50.276 [2024-11-05 17:49:10.101090] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:50.276 [2024-11-05 17:49:10.107691] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:50.276 [2024-11-05 17:49:10.115272] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:50.276 [2024-11-05 17:49:10.115292] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:46.534 00:15:46.534 fio_test: (groupid=0, jobs=1): err= 0: pid=84050: Tue Nov 5 17:49:59 2024 00:15:46.534 read: IOPS=27.2k, BW=106MiB/s (111MB/s)(6372MiB/60002msec) 00:15:46.534 slat (nsec): min=943, max=256454, 
avg=4946.34, stdev=1580.00 00:15:46.534 clat (usec): min=648, max=5998.8k, avg=2318.49, stdev=38142.08 00:15:46.534 lat (usec): min=653, max=5998.8k, avg=2323.44, stdev=38142.08 00:15:46.534 clat percentiles (usec): 00:15:46.534 | 1.00th=[ 1631], 5.00th=[ 1729], 10.00th=[ 1745], 20.00th=[ 1778], 00:15:46.534 | 30.00th=[ 1795], 40.00th=[ 1811], 50.00th=[ 1827], 60.00th=[ 1860], 00:15:46.534 | 70.00th=[ 2114], 80.00th=[ 2343], 90.00th=[ 2671], 95.00th=[ 2966], 00:15:46.534 | 99.00th=[ 4817], 99.50th=[ 5145], 99.90th=[ 6063], 99.95th=[ 7177], 00:15:46.534 | 99.99th=[12649] 00:15:46.534 bw ( KiB/s): min=27624, max=134784, per=100.00%, avg=119679.33, stdev=18839.52, samples=108 00:15:46.534 iops : min= 6906, max=33696, avg=29919.83, stdev=4709.88, samples=108 00:15:46.534 write: IOPS=27.2k, BW=106MiB/s (111MB/s)(6366MiB/60002msec); 0 zone resets 00:15:46.534 slat (nsec): min=1110, max=1187.0k, avg=5008.57, stdev=1946.14 00:15:46.534 clat (usec): min=581, max=5999.1k, avg=2381.01, stdev=36985.46 00:15:46.534 lat (usec): min=585, max=5999.1k, avg=2386.01, stdev=36985.47 00:15:46.534 clat percentiles (usec): 00:15:46.534 | 1.00th=[ 1680], 5.00th=[ 1811], 10.00th=[ 1827], 20.00th=[ 1860], 00:15:46.534 | 30.00th=[ 1876], 40.00th=[ 1893], 50.00th=[ 1926], 60.00th=[ 1942], 00:15:46.534 | 70.00th=[ 2212], 80.00th=[ 2442], 90.00th=[ 2769], 95.00th=[ 2933], 00:15:46.534 | 99.00th=[ 4817], 99.50th=[ 5145], 99.90th=[ 6128], 99.95th=[ 7111], 00:15:46.534 | 99.99th=[12518] 00:15:46.534 bw ( KiB/s): min=26496, max=134336, per=100.00%, avg=119587.19, stdev=18975.82, samples=108 00:15:46.534 iops : min= 6624, max=33584, avg=29896.80, stdev=4743.95, samples=108 00:15:46.534 lat (usec) : 750=0.01%, 1000=0.01% 00:15:46.534 lat (msec) : 2=67.78%, 4=29.92%, 10=2.28%, 20=0.01%, >=2000=0.01% 00:15:46.534 cpu : usr=6.14%, sys=27.74%, ctx=110645, majf=0, minf=13 00:15:46.534 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:46.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:46.534 issued rwts: total=1631209,1629739,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:46.534 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:46.534 00:15:46.534 Run status group 0 (all jobs): 00:15:46.534 READ: bw=106MiB/s (111MB/s), 106MiB/s-106MiB/s (111MB/s-111MB/s), io=6372MiB (6681MB), run=60002-60002msec 00:15:46.534 WRITE: bw=106MiB/s (111MB/s), 106MiB/s-106MiB/s (111MB/s-111MB/s), io=6366MiB (6675MB), run=60002-60002msec 00:15:46.534 00:15:46.534 Disk stats (read/write): 00:15:46.534 ublkb1: ios=1627608/1626215, merge=0/0, ticks=3689833/3656797, in_queue=7346630, util=99.90% 00:15:46.534 17:49:59 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:46.534 17:49:59 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.534 17:49:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:46.534 [2024-11-05 17:49:59.375904] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:46.534 [2024-11-05 17:49:59.413176] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:46.534 [2024-11-05 17:49:59.413311] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:46.534 [2024-11-05 17:49:59.418095] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:46.534 [2024-11-05 17:49:59.418180] ublk.c: 
985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:46.534 [2024-11-05 17:49:59.418189] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:46.534 17:49:59 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.534 17:49:59 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:46.534 17:49:59 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.534 17:49:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:46.534 [2024-11-05 17:49:59.433153] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:46.534 [2024-11-05 17:49:59.434028] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:46.534 [2024-11-05 17:49:59.434056] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:46.534 17:49:59 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.534 17:49:59 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:46.534 17:49:59 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:46.534 17:49:59 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 84157 00:15:46.534 17:49:59 ublk_recovery -- common/autotest_common.sh@952 -- # '[' -z 84157 ']' 00:15:46.534 17:49:59 ublk_recovery -- common/autotest_common.sh@956 -- # kill -0 84157 00:15:46.534 17:49:59 ublk_recovery -- common/autotest_common.sh@957 -- # uname 00:15:46.534 17:49:59 ublk_recovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:46.534 17:49:59 ublk_recovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84157 00:15:46.534 17:49:59 ublk_recovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:46.534 17:49:59 ublk_recovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:46.534 17:49:59 ublk_recovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84157' 00:15:46.534 killing process with pid 84157 00:15:46.534 17:49:59 ublk_recovery -- common/autotest_common.sh@971 -- # kill 84157 00:15:46.534 17:49:59 ublk_recovery -- common/autotest_common.sh@976 -- # wait 84157 00:15:46.534 [2024-11-05 17:49:59.692332] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:46.534 [2024-11-05 17:49:59.692393] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:46.534 00:15:46.534 real 1m3.031s 00:15:46.534 user 1m41.396s 00:15:46.534 sys 0m34.502s 00:15:46.534 17:50:00 ublk_recovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:46.534 ************************************ 00:15:46.534 END TEST ublk_recovery 00:15:46.534 ************************************ 00:15:46.534 17:50:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:46.534 17:50:00 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:15:46.534 17:50:00 -- spdk/autotest.sh@256 -- # timing_exit lib 00:15:46.534 17:50:00 -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:46.534 17:50:00 -- common/autotest_common.sh@10 -- # set +x 00:15:46.534 17:50:00 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:15:46.534 17:50:00 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:15:46.534 17:50:00 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:15:46.534 17:50:00 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:15:46.534 17:50:00 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:46.534 17:50:00 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:46.534 17:50:00 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:15:46.534 17:50:00 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:15:46.534 17:50:00 -- 
spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:15:46.534 17:50:00 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:15:46.534 17:50:00 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:46.534 17:50:00 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:15:46.534 17:50:00 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:46.534 17:50:00 -- common/autotest_common.sh@10 -- # set +x 00:15:46.534 ************************************ 00:15:46.534 START TEST ftl 00:15:46.534 ************************************ 00:15:46.534 17:50:00 ftl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:46.534 * Looking for test storage... 00:15:46.534 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:46.534 17:50:00 ftl -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:46.534 17:50:00 ftl -- common/autotest_common.sh@1691 -- # lcov --version 00:15:46.534 17:50:00 ftl -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:46.534 17:50:00 ftl -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:46.534 17:50:00 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:46.534 17:50:00 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:46.534 17:50:00 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:46.534 17:50:00 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:15:46.534 17:50:00 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:15:46.534 17:50:00 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:15:46.534 17:50:00 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:15:46.534 17:50:00 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:15:46.534 17:50:00 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:15:46.534 17:50:00 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:15:46.534 17:50:00 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:46.534 17:50:00 ftl -- scripts/common.sh@344 -- # case "$op" in 00:15:46.534 17:50:00 ftl -- scripts/common.sh@345 -- # : 1 00:15:46.534 17:50:00 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:46.534 17:50:00 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:46.534 17:50:00 ftl -- scripts/common.sh@365 -- # decimal 1 00:15:46.534 17:50:00 ftl -- scripts/common.sh@353 -- # local d=1 00:15:46.534 17:50:00 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:46.534 17:50:00 ftl -- scripts/common.sh@355 -- # echo 1 00:15:46.534 17:50:00 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:15:46.534 17:50:00 ftl -- scripts/common.sh@366 -- # decimal 2 00:15:46.534 17:50:00 ftl -- scripts/common.sh@353 -- # local d=2 00:15:46.534 17:50:00 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:46.534 17:50:00 ftl -- scripts/common.sh@355 -- # echo 2 00:15:46.534 17:50:00 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:15:46.534 17:50:00 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:46.534 17:50:00 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:46.534 17:50:00 ftl -- scripts/common.sh@368 -- # return 0 00:15:46.534 17:50:00 ftl -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:46.535 17:50:00 ftl -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:46.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.535 --rc genhtml_branch_coverage=1 00:15:46.535 --rc genhtml_function_coverage=1 00:15:46.535 --rc genhtml_legend=1 00:15:46.535 --rc geninfo_all_blocks=1 00:15:46.535 --rc geninfo_unexecuted_blocks=1 00:15:46.535 00:15:46.535 ' 00:15:46.535 17:50:00 ftl -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:46.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.535 --rc genhtml_branch_coverage=1 00:15:46.535 --rc genhtml_function_coverage=1 00:15:46.535 --rc genhtml_legend=1 00:15:46.535 --rc geninfo_all_blocks=1 00:15:46.535 --rc geninfo_unexecuted_blocks=1 00:15:46.535 00:15:46.535 ' 00:15:46.535 17:50:00 ftl -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:46.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.535 --rc genhtml_branch_coverage=1 00:15:46.535 --rc genhtml_function_coverage=1 00:15:46.535 --rc genhtml_legend=1 00:15:46.535 --rc geninfo_all_blocks=1 00:15:46.535 --rc geninfo_unexecuted_blocks=1 00:15:46.535 00:15:46.535 ' 00:15:46.535 17:50:00 ftl -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:46.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.535 --rc genhtml_branch_coverage=1 00:15:46.535 --rc genhtml_function_coverage=1 00:15:46.535 --rc genhtml_legend=1 00:15:46.535 --rc geninfo_all_blocks=1 00:15:46.535 --rc geninfo_unexecuted_blocks=1 00:15:46.535 00:15:46.535 ' 00:15:46.535 17:50:00 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:46.535 17:50:00 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:46.535 17:50:00 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:46.535 17:50:00 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:46.535 17:50:00 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
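The lt/cmp_versions trace above (here and in the earlier ublk_recovery run) is scripts/common.sh deciding whether the installed lcov predates 2.x. A condensed re-implementation of that field-by-field comparison, simplified for readability; the real functions in scripts/common.sh carry extra bookkeeping but split on the same '.-:' separators:

    version_lt() {    # 0 (true) when $1 sorts before $2
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov older than 2.x"    # mirrors the 'lt 1.15 2' trace
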
00:15:46.535 17:50:00 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:46.535 17:50:00 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:46.535 17:50:00 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:46.535 17:50:00 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:46.535 17:50:00 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:46.535 17:50:00 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:46.535 17:50:00 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:46.535 17:50:00 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:46.535 17:50:00 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:46.535 17:50:00 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:46.535 17:50:00 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:46.535 17:50:00 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:46.535 17:50:00 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:46.535 17:50:00 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:46.535 17:50:00 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:46.535 17:50:00 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:46.535 17:50:00 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:46.535 17:50:00 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:46.535 17:50:00 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:46.535 17:50:00 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:46.535 17:50:00 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:46.535 17:50:00 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:46.535 17:50:00 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:46.535 17:50:00 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:46.535 17:50:00 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:46.535 17:50:00 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:46.535 17:50:00 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:15:46.535 17:50:00 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:46.535 17:50:00 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:46.535 17:50:00 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:46.535 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:46.535 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:46.535 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:46.535 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:46.535 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:46.535 17:50:00 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:46.535 17:50:00 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=84945 00:15:46.535 17:50:00 ftl -- ftl/ftl.sh@38 -- # waitforlisten 84945 00:15:46.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
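ftl.sh starts this target with --wait-for-rpc, so the process comes up with subsystem initialization deferred; the traced lines that follow then set bdev options before completing startup. As standalone commands, that sequence looks roughly like this (the process substitution is what appears in the trace as /dev/fd/62):

    scripts/rpc.py bdev_set_options -d        # -d: disable bdev auto-examine
    scripts/rpc.py framework_start_init
    scripts/rpc.py load_subsystem_config -j <(scripts/gen_nvme.sh)
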
00:15:46.535 17:50:00 ftl -- common/autotest_common.sh@833 -- # '[' -z 84945 ']' 00:15:46.535 17:50:00 ftl -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.535 17:50:00 ftl -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:46.535 17:50:00 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.535 17:50:00 ftl -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:46.535 17:50:00 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:46.535 [2024-11-05 17:50:00.875854] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:15:46.535 [2024-11-05 17:50:00.876008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84945 ] 00:15:46.535 [2024-11-05 17:50:01.010586] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:46.535 [2024-11-05 17:50:01.041590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.535 [2024-11-05 17:50:01.073038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.535 17:50:01 ftl -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:46.535 17:50:01 ftl -- common/autotest_common.sh@866 -- # return 0 00:15:46.535 17:50:01 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:46.535 17:50:01 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:46.536 17:50:02 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:46.536 17:50:02 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:46.536 17:50:02 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:46.536 17:50:02 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:46.536 17:50:02 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:46.536 17:50:03 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:46.536 17:50:03 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:46.536 17:50:03 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:46.536 17:50:03 ftl -- ftl/ftl.sh@50 -- # break 00:15:46.536 17:50:03 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:46.536 17:50:03 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:15:46.536 17:50:03 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:46.536 17:50:03 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:46.536 17:50:03 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:46.536 17:50:03 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:46.536 17:50:03 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:46.536 17:50:03 ftl -- ftl/ftl.sh@63 -- # break 00:15:46.536 17:50:03 ftl -- ftl/ftl.sh@66 -- # killprocess 84945 00:15:46.536 17:50:03 ftl -- common/autotest_common.sh@952 -- # '[' -z 84945 ']' 00:15:46.536 17:50:03 ftl -- common/autotest_common.sh@956 -- # kill -0 84945 
00:15:46.536 17:50:03 ftl -- common/autotest_common.sh@957 -- # uname 00:15:46.536 17:50:03 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:46.536 17:50:03 ftl -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84945 00:15:46.536 17:50:03 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:46.536 killing process with pid 84945 00:15:46.536 17:50:03 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:46.536 17:50:03 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84945' 00:15:46.536 17:50:03 ftl -- common/autotest_common.sh@971 -- # kill 84945 00:15:46.536 17:50:03 ftl -- common/autotest_common.sh@976 -- # wait 84945 00:15:46.536 17:50:03 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:46.536 17:50:03 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:46.536 17:50:03 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:46.536 17:50:03 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:46.536 17:50:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:46.536 ************************************ 00:15:46.536 START TEST ftl_fio_basic 00:15:46.536 ************************************ 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:46.536 * Looking for test storage... 00:15:46.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lcov --version 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:46.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.536 --rc genhtml_branch_coverage=1 00:15:46.536 --rc genhtml_function_coverage=1 00:15:46.536 --rc genhtml_legend=1 00:15:46.536 --rc geninfo_all_blocks=1 00:15:46.536 --rc geninfo_unexecuted_blocks=1 00:15:46.536 00:15:46.536 ' 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:46.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.536 --rc genhtml_branch_coverage=1 00:15:46.536 --rc genhtml_function_coverage=1 00:15:46.536 --rc genhtml_legend=1 00:15:46.536 --rc geninfo_all_blocks=1 00:15:46.536 --rc geninfo_unexecuted_blocks=1 00:15:46.536 00:15:46.536 ' 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:46.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.536 --rc genhtml_branch_coverage=1 00:15:46.536 --rc genhtml_function_coverage=1 00:15:46.536 --rc genhtml_legend=1 00:15:46.536 --rc geninfo_all_blocks=1 00:15:46.536 --rc geninfo_unexecuted_blocks=1 00:15:46.536 00:15:46.536 ' 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:46.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.536 --rc genhtml_branch_coverage=1 00:15:46.536 --rc genhtml_function_coverage=1 00:15:46.536 --rc genhtml_legend=1 00:15:46.536 --rc geninfo_all_blocks=1 00:15:46.536 --rc geninfo_unexecuted_blocks=1 00:15:46.536 00:15:46.536 ' 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
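Each teardown traced above (pids 83637, 84157 and 84945) runs the same killprocess guard: check the pid is set and alive, look up the process name so a stray sudo is never killed, then kill and reap. A condensed sketch of that helper from common/autotest_common.sh, mirroring the traced '[' tests:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                       # still running?
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1                   # never kill sudo
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                       # reap the reactor
    }
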
00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:46.536 17:50:03 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:15:46.537 17:50:03 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:46.537 17:50:03 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:46.537 17:50:03 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:46.537 17:50:03 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:46.537 17:50:03 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:46.537 17:50:03 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:46.537 17:50:03 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:46.537 17:50:03 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:46.537 17:50:03 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:46.537 17:50:03 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=85066 00:15:46.537 17:50:03 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 85066 00:15:46.537 17:50:03 ftl.ftl_fio_basic -- common/autotest_common.sh@833 -- # '[' -z 85066 ']' 00:15:46.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.537 17:50:03 ftl.ftl_fio_basic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.537 17:50:03 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:46.537 17:50:03 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.537 17:50:03 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:46.537 17:50:03 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:46.537 17:50:03 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:46.537 [2024-11-05 17:50:03.869521] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:15:46.537 [2024-11-05 17:50:03.869649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85066 ] 00:15:46.537 [2024-11-05 17:50:04.005535] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
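With suite['basic'] selected, fio.sh will drive each listed workload (randw-verify, randw-verify-j2, randw-verify-depth128) against the FTL bdev once it exists, with FTL_BDEV_NAME and FTL_JSON_CONF consumed by the fio job files. A sketch of that loop under assumed paths (the actual loop sits further down in fio.sh, beyond this excerpt):

    export FTL_BDEV_NAME=ftl0
    export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
    for t in randw-verify randw-verify-j2 randw-verify-depth128; do
        # fio_bdev wraps fio with SPDK's bdev ioengine; the job path is assumed.
        timeout 240 fio_bdev "test/ftl/config/fio/$t.fio"
    done
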
00:15:46.537 [2024-11-05 17:50:04.034939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:46.537 [2024-11-05 17:50:04.079716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:46.537 [2024-11-05 17:50:04.080027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:46.537 [2024-11-05 17:50:04.080045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.537 17:50:04 ftl.ftl_fio_basic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:46.537 17:50:04 ftl.ftl_fio_basic -- common/autotest_common.sh@866 -- # return 0 00:15:46.537 17:50:04 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:46.537 17:50:04 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:46.537 17:50:04 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:46.537 17:50:04 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:46.537 17:50:04 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:46.537 17:50:04 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:15:46.537 { 00:15:46.537 "name": "nvme0n1", 00:15:46.537 "aliases": [ 00:15:46.537 "11849548-40a0-4f2a-b812-f0ab7de7cd3a" 00:15:46.537 ], 00:15:46.537 "product_name": "NVMe disk", 00:15:46.537 "block_size": 4096, 00:15:46.537 "num_blocks": 1310720, 00:15:46.537 "uuid": "11849548-40a0-4f2a-b812-f0ab7de7cd3a", 00:15:46.537 "numa_id": -1, 00:15:46.537 "assigned_rate_limits": { 00:15:46.537 "rw_ios_per_sec": 0, 00:15:46.537 "rw_mbytes_per_sec": 0, 00:15:46.537 "r_mbytes_per_sec": 0, 00:15:46.537 "w_mbytes_per_sec": 0 00:15:46.537 }, 00:15:46.537 "claimed": false, 00:15:46.537 "zoned": false, 00:15:46.537 "supported_io_types": { 00:15:46.537 "read": true, 00:15:46.537 "write": true, 00:15:46.537 "unmap": true, 00:15:46.537 "flush": true, 00:15:46.537 "reset": true, 00:15:46.537 "nvme_admin": true, 00:15:46.537 "nvme_io": true, 00:15:46.537 "nvme_io_md": false, 00:15:46.537 "write_zeroes": true, 00:15:46.537 "zcopy": false, 00:15:46.537 "get_zone_info": false, 00:15:46.537 "zone_management": false, 00:15:46.537 "zone_append": false, 00:15:46.537 "compare": true, 00:15:46.537 "compare_and_write": false, 00:15:46.537 "abort": true, 00:15:46.537 "seek_hole": false, 00:15:46.537 "seek_data": false, 00:15:46.537 "copy": true, 00:15:46.537 "nvme_iov_md": false 00:15:46.537 }, 00:15:46.537 "driver_specific": { 00:15:46.537 "nvme": [ 00:15:46.537 { 00:15:46.537 "pci_address": "0000:00:11.0", 00:15:46.537 "trid": { 00:15:46.537 "trtype": "PCIe", 00:15:46.537 
"traddr": "0000:00:11.0" 00:15:46.537 }, 00:15:46.537 "ctrlr_data": { 00:15:46.537 "cntlid": 0, 00:15:46.537 "vendor_id": "0x1b36", 00:15:46.537 "model_number": "QEMU NVMe Ctrl", 00:15:46.537 "serial_number": "12341", 00:15:46.537 "firmware_revision": "8.0.0", 00:15:46.537 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:46.537 "oacs": { 00:15:46.537 "security": 0, 00:15:46.537 "format": 1, 00:15:46.537 "firmware": 0, 00:15:46.537 "ns_manage": 1 00:15:46.537 }, 00:15:46.537 "multi_ctrlr": false, 00:15:46.537 "ana_reporting": false 00:15:46.537 }, 00:15:46.537 "vs": { 00:15:46.537 "nvme_version": "1.4" 00:15:46.537 }, 00:15:46.537 "ns_data": { 00:15:46.537 "id": 1, 00:15:46.537 "can_share": false 00:15:46.537 } 00:15:46.537 } 00:15:46.537 ], 00:15:46.537 "mp_policy": "active_passive" 00:15:46.537 } 00:15:46.537 } 00:15:46.537 ]' 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=1310720 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 5120 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=b9494dc1-b136-4097-89b6-edad4a883edb 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b9494dc1-b136-4097-89b6-edad4a883edb 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=b3695211-ac81-47d4-88aa-aafbb1b5331b 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b3695211-ac81-47d4-88aa-aafbb1b5331b 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=b3695211-ac81-47d4-88aa-aafbb1b5331b 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size b3695211-ac81-47d4-88aa-aafbb1b5331b 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=b3695211-ac81-47d4-88aa-aafbb1b5331b 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:15:46.537 17:50:05 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b3695211-ac81-47d4-88aa-aafbb1b5331b 00:15:46.537 17:50:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:15:46.537 { 00:15:46.537 "name": "b3695211-ac81-47d4-88aa-aafbb1b5331b", 00:15:46.537 "aliases": [ 00:15:46.537 "lvs/nvme0n1p0" 00:15:46.537 ], 00:15:46.537 "product_name": "Logical Volume", 00:15:46.537 "block_size": 4096, 00:15:46.537 "num_blocks": 26476544, 00:15:46.537 "uuid": "b3695211-ac81-47d4-88aa-aafbb1b5331b", 00:15:46.537 "assigned_rate_limits": { 00:15:46.537 "rw_ios_per_sec": 0, 00:15:46.537 "rw_mbytes_per_sec": 0, 00:15:46.537 "r_mbytes_per_sec": 0, 00:15:46.537 "w_mbytes_per_sec": 0 00:15:46.537 }, 00:15:46.537 "claimed": false, 00:15:46.537 "zoned": false, 00:15:46.537 "supported_io_types": { 00:15:46.537 "read": true, 00:15:46.537 "write": true, 00:15:46.537 "unmap": true, 00:15:46.537 "flush": false, 00:15:46.537 "reset": true, 00:15:46.537 "nvme_admin": false, 00:15:46.537 "nvme_io": false, 00:15:46.537 "nvme_io_md": false, 00:15:46.537 "write_zeroes": true, 00:15:46.537 "zcopy": false, 00:15:46.537 "get_zone_info": false, 00:15:46.538 "zone_management": false, 00:15:46.538 "zone_append": false, 00:15:46.538 "compare": false, 00:15:46.538 "compare_and_write": false, 00:15:46.538 "abort": false, 00:15:46.538 "seek_hole": true, 00:15:46.538 "seek_data": true, 00:15:46.538 "copy": false, 00:15:46.538 "nvme_iov_md": false 00:15:46.538 }, 00:15:46.538 "driver_specific": { 00:15:46.538 "lvol": { 00:15:46.538 "lvol_store_uuid": "b9494dc1-b136-4097-89b6-edad4a883edb", 00:15:46.538 "base_bdev": "nvme0n1", 00:15:46.538 "thin_provision": true, 00:15:46.538 "num_allocated_clusters": 0, 00:15:46.538 "snapshot": false, 00:15:46.538 "clone": false, 00:15:46.538 "esnap_clone": false 00:15:46.538 } 00:15:46.538 } 00:15:46.538 } 00:15:46.538 ]' 00:15:46.538 17:50:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:15:46.538 17:50:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:15:46.538 17:50:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:15:46.538 17:50:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:15:46.538 17:50:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:15:46.538 17:50:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:15:46.538 17:50:06 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:46.538 17:50:06 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:46.538 17:50:06 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:46.538 17:50:06 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:46.538 17:50:06 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:46.538 17:50:06 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size b3695211-ac81-47d4-88aa-aafbb1b5331b 00:15:46.538 17:50:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=b3695211-ac81-47d4-88aa-aafbb1b5331b 00:15:46.538 17:50:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:15:46.538 17:50:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:15:46.538 17:50:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:15:46.538 17:50:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b3695211-ac81-47d4-88aa-aafbb1b5331b 00:15:46.797 17:50:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:15:46.797 { 00:15:46.797 "name": "b3695211-ac81-47d4-88aa-aafbb1b5331b", 00:15:46.797 "aliases": [ 00:15:46.797 "lvs/nvme0n1p0" 00:15:46.797 ], 00:15:46.797 "product_name": "Logical Volume", 00:15:46.797 "block_size": 4096, 00:15:46.797 "num_blocks": 26476544, 00:15:46.797 "uuid": "b3695211-ac81-47d4-88aa-aafbb1b5331b", 00:15:46.797 "assigned_rate_limits": { 00:15:46.797 "rw_ios_per_sec": 0, 00:15:46.797 "rw_mbytes_per_sec": 0, 00:15:46.797 "r_mbytes_per_sec": 0, 00:15:46.797 "w_mbytes_per_sec": 0 00:15:46.797 }, 00:15:46.797 "claimed": false, 00:15:46.797 "zoned": false, 00:15:46.797 "supported_io_types": { 00:15:46.797 "read": true, 00:15:46.797 "write": true, 00:15:46.797 "unmap": true, 00:15:46.797 "flush": false, 00:15:46.797 "reset": true, 00:15:46.797 "nvme_admin": false, 00:15:46.797 "nvme_io": false, 00:15:46.797 "nvme_io_md": false, 00:15:46.797 "write_zeroes": true, 00:15:46.797 "zcopy": false, 00:15:46.797 "get_zone_info": false, 00:15:46.797 "zone_management": false, 00:15:46.797 "zone_append": false, 00:15:46.797 "compare": false, 00:15:46.797 "compare_and_write": false, 00:15:46.797 "abort": false, 00:15:46.797 "seek_hole": true, 00:15:46.797 "seek_data": true, 00:15:46.797 "copy": false, 00:15:46.797 "nvme_iov_md": false 00:15:46.797 }, 00:15:46.797 "driver_specific": { 00:15:46.797 "lvol": { 00:15:46.797 "lvol_store_uuid": "b9494dc1-b136-4097-89b6-edad4a883edb", 00:15:46.797 "base_bdev": "nvme0n1", 00:15:46.797 "thin_provision": true, 00:15:46.797 "num_allocated_clusters": 0, 00:15:46.797 "snapshot": false, 00:15:46.797 "clone": false, 00:15:46.797 "esnap_clone": false 00:15:46.797 } 00:15:46.797 } 00:15:46.797 } 00:15:46.797 ]' 00:15:46.797 17:50:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:15:46.797 17:50:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:15:46.797 17:50:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:15:46.797 17:50:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:15:46.797 17:50:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:15:46.797 17:50:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:15:46.797 17:50:06 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:46.797 17:50:06 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:47.055 17:50:06 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:47.055 17:50:06 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:47.055 17:50:06 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:47.055 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:47.055 17:50:06 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size b3695211-ac81-47d4-88aa-aafbb1b5331b 00:15:47.055 17:50:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=b3695211-ac81-47d4-88aa-aafbb1b5331b 00:15:47.055 17:50:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:15:47.055 17:50:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:15:47.055 17:50:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:15:47.055 17:50:06 ftl.ftl_fio_basic 
-- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b3695211-ac81-47d4-88aa-aafbb1b5331b 00:15:47.314 17:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:15:47.314 { 00:15:47.314 "name": "b3695211-ac81-47d4-88aa-aafbb1b5331b", 00:15:47.314 "aliases": [ 00:15:47.314 "lvs/nvme0n1p0" 00:15:47.314 ], 00:15:47.314 "product_name": "Logical Volume", 00:15:47.314 "block_size": 4096, 00:15:47.314 "num_blocks": 26476544, 00:15:47.314 "uuid": "b3695211-ac81-47d4-88aa-aafbb1b5331b", 00:15:47.314 "assigned_rate_limits": { 00:15:47.314 "rw_ios_per_sec": 0, 00:15:47.314 "rw_mbytes_per_sec": 0, 00:15:47.314 "r_mbytes_per_sec": 0, 00:15:47.314 "w_mbytes_per_sec": 0 00:15:47.314 }, 00:15:47.314 "claimed": false, 00:15:47.314 "zoned": false, 00:15:47.314 "supported_io_types": { 00:15:47.314 "read": true, 00:15:47.314 "write": true, 00:15:47.314 "unmap": true, 00:15:47.314 "flush": false, 00:15:47.314 "reset": true, 00:15:47.314 "nvme_admin": false, 00:15:47.314 "nvme_io": false, 00:15:47.314 "nvme_io_md": false, 00:15:47.314 "write_zeroes": true, 00:15:47.314 "zcopy": false, 00:15:47.314 "get_zone_info": false, 00:15:47.314 "zone_management": false, 00:15:47.314 "zone_append": false, 00:15:47.314 "compare": false, 00:15:47.314 "compare_and_write": false, 00:15:47.314 "abort": false, 00:15:47.314 "seek_hole": true, 00:15:47.314 "seek_data": true, 00:15:47.314 "copy": false, 00:15:47.314 "nvme_iov_md": false 00:15:47.314 }, 00:15:47.314 "driver_specific": { 00:15:47.314 "lvol": { 00:15:47.314 "lvol_store_uuid": "b9494dc1-b136-4097-89b6-edad4a883edb", 00:15:47.314 "base_bdev": "nvme0n1", 00:15:47.314 "thin_provision": true, 00:15:47.314 "num_allocated_clusters": 0, 00:15:47.314 "snapshot": false, 00:15:47.314 "clone": false, 00:15:47.314 "esnap_clone": false 00:15:47.314 } 00:15:47.314 } 00:15:47.314 } 00:15:47.314 ]' 00:15:47.314 17:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:15:47.314 17:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:15:47.314 17:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:15:47.314 17:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:15:47.314 17:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:15:47.314 17:50:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:15:47.314 17:50:07 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:47.314 17:50:07 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:47.314 17:50:07 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b3695211-ac81-47d4-88aa-aafbb1b5331b -c nvc0n1p0 --l2p_dram_limit 60 00:15:47.573 [2024-11-05 17:50:07.325054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.573 [2024-11-05 17:50:07.325128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:47.573 [2024-11-05 17:50:07.325145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:47.573 [2024-11-05 17:50:07.325154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.573 [2024-11-05 17:50:07.325236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.573 [2024-11-05 17:50:07.325246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:47.573 [2024-11-05 17:50:07.325259] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:15:47.573 [2024-11-05 17:50:07.325268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.573 [2024-11-05 17:50:07.325316] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:47.573 [2024-11-05 17:50:07.325590] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:47.573 [2024-11-05 17:50:07.325607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.573 [2024-11-05 17:50:07.325615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:47.574 [2024-11-05 17:50:07.325626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:15:47.574 [2024-11-05 17:50:07.325633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.574 [2024-11-05 17:50:07.325721] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 338e29f3-c8eb-4323-bd91-f9a2bfa3b8fb 00:15:47.574 [2024-11-05 17:50:07.327141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.574 [2024-11-05 17:50:07.327176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:47.574 [2024-11-05 17:50:07.327187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:15:47.574 [2024-11-05 17:50:07.327197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.574 [2024-11-05 17:50:07.334246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.574 [2024-11-05 17:50:07.334280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:47.574 [2024-11-05 17:50:07.334306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.970 ms 00:15:47.574 [2024-11-05 17:50:07.334318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.574 [2024-11-05 17:50:07.334404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.574 [2024-11-05 17:50:07.334415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:47.574 [2024-11-05 17:50:07.334432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:15:47.574 [2024-11-05 17:50:07.334442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.574 [2024-11-05 17:50:07.334494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.574 [2024-11-05 17:50:07.334506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:47.574 [2024-11-05 17:50:07.334514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:47.574 [2024-11-05 17:50:07.334523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.574 [2024-11-05 17:50:07.334554] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:47.574 [2024-11-05 17:50:07.336359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.574 [2024-11-05 17:50:07.336535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:47.574 [2024-11-05 17:50:07.336556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.809 ms 00:15:47.574 [2024-11-05 17:50:07.336565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.574 [2024-11-05 17:50:07.336611] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.574 [2024-11-05 17:50:07.336619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:47.574 [2024-11-05 17:50:07.336631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:15:47.574 [2024-11-05 17:50:07.336639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.574 [2024-11-05 17:50:07.336685] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:47.574 [2024-11-05 17:50:07.336842] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:47.574 [2024-11-05 17:50:07.336856] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:47.574 [2024-11-05 17:50:07.336867] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:47.574 [2024-11-05 17:50:07.336883] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:47.574 [2024-11-05 17:50:07.336892] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:47.574 [2024-11-05 17:50:07.336902] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:47.574 [2024-11-05 17:50:07.336910] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:47.574 [2024-11-05 17:50:07.336930] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:47.574 [2024-11-05 17:50:07.336937] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:47.574 [2024-11-05 17:50:07.336948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.574 [2024-11-05 17:50:07.336955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:47.574 [2024-11-05 17:50:07.336964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:15:47.574 [2024-11-05 17:50:07.336972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.574 [2024-11-05 17:50:07.337089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.574 [2024-11-05 17:50:07.337099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:47.574 [2024-11-05 17:50:07.337119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:15:47.574 [2024-11-05 17:50:07.337126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.574 [2024-11-05 17:50:07.337255] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:47.574 [2024-11-05 17:50:07.337265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:47.574 [2024-11-05 17:50:07.337275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:47.574 [2024-11-05 17:50:07.337283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:47.574 [2024-11-05 17:50:07.337293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:47.574 [2024-11-05 17:50:07.337299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:47.574 [2024-11-05 17:50:07.337311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:47.574 [2024-11-05 17:50:07.337318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:47.574 
[2024-11-05 17:50:07.337327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:47.574 [2024-11-05 17:50:07.337339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:47.574 [2024-11-05 17:50:07.337348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:47.574 [2024-11-05 17:50:07.337355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:47.574 [2024-11-05 17:50:07.337365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:47.574 [2024-11-05 17:50:07.337378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:47.574 [2024-11-05 17:50:07.337386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:47.574 [2024-11-05 17:50:07.337393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:47.574 [2024-11-05 17:50:07.337401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:47.574 [2024-11-05 17:50:07.337408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:47.574 [2024-11-05 17:50:07.337416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:47.574 [2024-11-05 17:50:07.337435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:47.574 [2024-11-05 17:50:07.337443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:47.574 [2024-11-05 17:50:07.337449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:47.574 [2024-11-05 17:50:07.337458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:47.574 [2024-11-05 17:50:07.337464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:47.574 [2024-11-05 17:50:07.337472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:47.574 [2024-11-05 17:50:07.337479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:47.574 [2024-11-05 17:50:07.337488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:47.574 [2024-11-05 17:50:07.337494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:47.574 [2024-11-05 17:50:07.337505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:47.574 [2024-11-05 17:50:07.337511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:47.574 [2024-11-05 17:50:07.337521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:47.574 [2024-11-05 17:50:07.337527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:47.574 [2024-11-05 17:50:07.337535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:47.574 [2024-11-05 17:50:07.337542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:47.574 [2024-11-05 17:50:07.337554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:47.574 [2024-11-05 17:50:07.337561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:47.574 [2024-11-05 17:50:07.337569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:47.574 [2024-11-05 17:50:07.337576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:47.574 [2024-11-05 17:50:07.337585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:47.574 [2024-11-05 17:50:07.337591] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:15:47.574 [2024-11-05 17:50:07.337600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:47.574 [2024-11-05 17:50:07.337606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:47.574 [2024-11-05 17:50:07.337614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:47.574 [2024-11-05 17:50:07.337621] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:47.574 [2024-11-05 17:50:07.337633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:47.574 [2024-11-05 17:50:07.337641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:47.574 [2024-11-05 17:50:07.337652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:47.574 [2024-11-05 17:50:07.337660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:47.574 [2024-11-05 17:50:07.337669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:47.574 [2024-11-05 17:50:07.337676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:47.574 [2024-11-05 17:50:07.337685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:47.574 [2024-11-05 17:50:07.337691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:47.574 [2024-11-05 17:50:07.337699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:47.574 [2024-11-05 17:50:07.337709] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:47.574 [2024-11-05 17:50:07.337722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:47.574 [2024-11-05 17:50:07.337730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:47.574 [2024-11-05 17:50:07.337739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:47.575 [2024-11-05 17:50:07.337746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:47.575 [2024-11-05 17:50:07.337755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:47.575 [2024-11-05 17:50:07.337762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:47.575 [2024-11-05 17:50:07.337771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:47.575 [2024-11-05 17:50:07.337778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:47.575 [2024-11-05 17:50:07.337787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:15:47.575 [2024-11-05 17:50:07.337794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:47.575 [2024-11-05 17:50:07.337803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 
blk_sz:0x20 00:15:47.575 [2024-11-05 17:50:07.337809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:47.575 [2024-11-05 17:50:07.337821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:47.575 [2024-11-05 17:50:07.337828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:47.575 [2024-11-05 17:50:07.337837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:47.575 [2024-11-05 17:50:07.337844] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:47.575 [2024-11-05 17:50:07.337855] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:47.575 [2024-11-05 17:50:07.337863] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:47.575 [2024-11-05 17:50:07.337872] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:47.575 [2024-11-05 17:50:07.337879] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:47.575 [2024-11-05 17:50:07.337889] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:47.575 [2024-11-05 17:50:07.337897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.575 [2024-11-05 17:50:07.337908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:47.575 [2024-11-05 17:50:07.337915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.711 ms 00:15:47.575 [2024-11-05 17:50:07.337924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.575 [2024-11-05 17:50:07.337988] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
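The 'FTL startup' sequence traced above is driven entirely by RPCs that appear earlier in this log: the PCIe controller at 0000:00:10.0 is attached as nvc0, a 5171 MiB split (nvc0n1p0) becomes the NV write-buffer cache, and ftl0 is built over the thin-provisioned logical volume, whose size the get_bdev_size helper derives as block_size x num_blocks in MiB (4096 B x 26476544 blocks = 103424 MiB, matching the "Base device capacity" dumped above). The layout is likewise self-consistent: 20971520 L2P entries x 4 B address size = 80.00 MiB, exactly the "Region l2p" size, which is why the 60 MiB --l2p_dram_limit later resolves to "l2p maximum resident size is: 59 (of 60) MiB". A minimal sketch of the same bring-up, reusing only commands and values visible in this run (the PCIe address, UUID, and sizes are specific to this machine):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # cache controller -> nvc0n1, then carve one 5171 MiB partition as the write buffer
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $rpc bdev_split_create nvc0n1 -s 5171 1
    # base device is the 103424 MiB thin-provisioned LV; cap the resident L2P at 60 MiB
    $rpc -t 240 bdev_ftl_create -b ftl0 -d b3695211-ac81-47d4-88aa-aafbb1b5331b -c nvc0n1p0 --l2p_dram_limit 60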
00:15:47.575 [2024-11-05 17:50:07.338000] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:50.143 [2024-11-05 17:50:10.133583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.143 [2024-11-05 17:50:10.133660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:50.143 [2024-11-05 17:50:10.133676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2795.583 ms 00:15:50.143 [2024-11-05 17:50:10.133688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.401 [2024-11-05 17:50:10.144570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.401 [2024-11-05 17:50:10.144628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:50.401 [2024-11-05 17:50:10.144647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.786 ms 00:15:50.401 [2024-11-05 17:50:10.144661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.401 [2024-11-05 17:50:10.144798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.401 [2024-11-05 17:50:10.144813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:50.401 [2024-11-05 17:50:10.144835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:15:50.401 [2024-11-05 17:50:10.144847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.401 [2024-11-05 17:50:10.168982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.401 [2024-11-05 17:50:10.169052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:50.401 [2024-11-05 17:50:10.169097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.081 ms 00:15:50.401 [2024-11-05 17:50:10.169116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.401 [2024-11-05 17:50:10.169185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.401 [2024-11-05 17:50:10.169207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:50.401 [2024-11-05 17:50:10.169225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:50.401 [2024-11-05 17:50:10.169245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.401 [2024-11-05 17:50:10.169764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.401 [2024-11-05 17:50:10.169803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:50.401 [2024-11-05 17:50:10.169824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:15:50.401 [2024-11-05 17:50:10.169844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.402 [2024-11-05 17:50:10.170027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.402 [2024-11-05 17:50:10.170049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:50.402 [2024-11-05 17:50:10.170059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:15:50.402 [2024-11-05 17:50:10.170092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.402 [2024-11-05 17:50:10.177953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.402 [2024-11-05 17:50:10.178022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:50.402 [2024-11-05 
17:50:10.178041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.802 ms 00:15:50.402 [2024-11-05 17:50:10.178090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.402 [2024-11-05 17:50:10.187248] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:50.402 [2024-11-05 17:50:10.204706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.402 [2024-11-05 17:50:10.204756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:50.402 [2024-11-05 17:50:10.204773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.494 ms 00:15:50.402 [2024-11-05 17:50:10.204782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.402 [2024-11-05 17:50:10.245396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.402 [2024-11-05 17:50:10.245458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:50.402 [2024-11-05 17:50:10.245476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.537 ms 00:15:50.402 [2024-11-05 17:50:10.245485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.402 [2024-11-05 17:50:10.245684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.402 [2024-11-05 17:50:10.245695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:50.402 [2024-11-05 17:50:10.245706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:15:50.402 [2024-11-05 17:50:10.245713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.402 [2024-11-05 17:50:10.249423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.402 [2024-11-05 17:50:10.249598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:50.402 [2024-11-05 17:50:10.249620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.654 ms 00:15:50.402 [2024-11-05 17:50:10.249642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.402 [2024-11-05 17:50:10.252138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.402 [2024-11-05 17:50:10.252171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:50.402 [2024-11-05 17:50:10.252185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.448 ms 00:15:50.402 [2024-11-05 17:50:10.252193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.402 [2024-11-05 17:50:10.252532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.402 [2024-11-05 17:50:10.252547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:50.402 [2024-11-05 17:50:10.252560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:15:50.402 [2024-11-05 17:50:10.252567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.402 [2024-11-05 17:50:10.278405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.402 [2024-11-05 17:50:10.278447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:50.402 [2024-11-05 17:50:10.278460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.808 ms 00:15:50.402 [2024-11-05 17:50:10.278468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.402 [2024-11-05 17:50:10.282647] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.402 [2024-11-05 17:50:10.282680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:50.402 [2024-11-05 17:50:10.282693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.111 ms 00:15:50.402 [2024-11-05 17:50:10.282700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.402 [2024-11-05 17:50:10.285521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.402 [2024-11-05 17:50:10.285551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:50.402 [2024-11-05 17:50:10.285563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.781 ms 00:15:50.402 [2024-11-05 17:50:10.285570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.402 [2024-11-05 17:50:10.288797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.402 [2024-11-05 17:50:10.288944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:50.402 [2024-11-05 17:50:10.288965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.182 ms 00:15:50.402 [2024-11-05 17:50:10.288973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.402 [2024-11-05 17:50:10.289016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.402 [2024-11-05 17:50:10.289037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:50.402 [2024-11-05 17:50:10.289058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:50.402 [2024-11-05 17:50:10.289082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.402 [2024-11-05 17:50:10.289174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:50.402 [2024-11-05 17:50:10.289186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:50.402 [2024-11-05 17:50:10.289197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:15:50.402 [2024-11-05 17:50:10.289216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:50.402 [2024-11-05 17:50:10.290278] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2964.748 ms, result 0 00:15:50.402 { 00:15:50.402 "name": "ftl0", 00:15:50.402 "uuid": "338e29f3-c8eb-4323-bd91-f9a2bfa3b8fb" 00:15:50.402 } 00:15:50.402 17:50:10 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:50.402 17:50:10 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:15:50.402 17:50:10 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:50.402 17:50:10 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local i 00:15:50.402 17:50:10 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:50.402 17:50:10 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:50.402 17:50:10 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:50.660 17:50:10 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:50.919 [ 00:15:50.919 { 00:15:50.919 "name": "ftl0", 00:15:50.919 "aliases": [ 00:15:50.919 "338e29f3-c8eb-4323-bd91-f9a2bfa3b8fb" 00:15:50.919 ], 00:15:50.919 "product_name": "FTL disk", 00:15:50.919 
"block_size": 4096, 00:15:50.919 "num_blocks": 20971520, 00:15:50.919 "uuid": "338e29f3-c8eb-4323-bd91-f9a2bfa3b8fb", 00:15:50.919 "assigned_rate_limits": { 00:15:50.919 "rw_ios_per_sec": 0, 00:15:50.919 "rw_mbytes_per_sec": 0, 00:15:50.919 "r_mbytes_per_sec": 0, 00:15:50.919 "w_mbytes_per_sec": 0 00:15:50.919 }, 00:15:50.919 "claimed": false, 00:15:50.919 "zoned": false, 00:15:50.919 "supported_io_types": { 00:15:50.919 "read": true, 00:15:50.919 "write": true, 00:15:50.919 "unmap": true, 00:15:50.919 "flush": true, 00:15:50.919 "reset": false, 00:15:50.919 "nvme_admin": false, 00:15:50.919 "nvme_io": false, 00:15:50.919 "nvme_io_md": false, 00:15:50.919 "write_zeroes": true, 00:15:50.919 "zcopy": false, 00:15:50.919 "get_zone_info": false, 00:15:50.919 "zone_management": false, 00:15:50.919 "zone_append": false, 00:15:50.919 "compare": false, 00:15:50.919 "compare_and_write": false, 00:15:50.919 "abort": false, 00:15:50.919 "seek_hole": false, 00:15:50.919 "seek_data": false, 00:15:50.919 "copy": false, 00:15:50.919 "nvme_iov_md": false 00:15:50.919 }, 00:15:50.919 "driver_specific": { 00:15:50.919 "ftl": { 00:15:50.919 "base_bdev": "b3695211-ac81-47d4-88aa-aafbb1b5331b", 00:15:50.919 "cache": "nvc0n1p0" 00:15:50.919 } 00:15:50.919 } 00:15:50.919 } 00:15:50.919 ] 00:15:50.919 17:50:10 ftl.ftl_fio_basic -- common/autotest_common.sh@909 -- # return 0 00:15:50.919 17:50:10 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:50.919 17:50:10 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:51.179 17:50:10 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:51.179 17:50:10 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:51.179 [2024-11-05 17:50:11.095203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.180 [2024-11-05 17:50:11.095261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:51.180 [2024-11-05 17:50:11.095276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:51.180 [2024-11-05 17:50:11.095287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.180 [2024-11-05 17:50:11.095318] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:51.180 [2024-11-05 17:50:11.095895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.180 [2024-11-05 17:50:11.095917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:51.180 [2024-11-05 17:50:11.095933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:15:51.180 [2024-11-05 17:50:11.095942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.180 [2024-11-05 17:50:11.096418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.180 [2024-11-05 17:50:11.096551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:51.180 [2024-11-05 17:50:11.096568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:15:51.180 [2024-11-05 17:50:11.096577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.180 [2024-11-05 17:50:11.099626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.180 [2024-11-05 17:50:11.099703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:51.180 [2024-11-05 
17:50:11.099715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.003 ms 00:15:51.180 [2024-11-05 17:50:11.099733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.180 [2024-11-05 17:50:11.104381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.180 [2024-11-05 17:50:11.104404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:51.180 [2024-11-05 17:50:11.104431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.619 ms 00:15:51.180 [2024-11-05 17:50:11.104437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.180 [2024-11-05 17:50:11.106028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.180 [2024-11-05 17:50:11.106055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:51.180 [2024-11-05 17:50:11.106072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.517 ms 00:15:51.180 [2024-11-05 17:50:11.106078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.180 [2024-11-05 17:50:11.110648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.180 [2024-11-05 17:50:11.110678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:51.180 [2024-11-05 17:50:11.110690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.535 ms 00:15:51.180 [2024-11-05 17:50:11.110696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.180 [2024-11-05 17:50:11.110837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.180 [2024-11-05 17:50:11.110845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:51.180 [2024-11-05 17:50:11.110854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:15:51.180 [2024-11-05 17:50:11.110859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.180 [2024-11-05 17:50:11.112327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.180 [2024-11-05 17:50:11.112353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:51.180 [2024-11-05 17:50:11.112362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.433 ms 00:15:51.180 [2024-11-05 17:50:11.112368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.180 [2024-11-05 17:50:11.113468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.180 [2024-11-05 17:50:11.113563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:51.180 [2024-11-05 17:50:11.113579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.063 ms 00:15:51.180 [2024-11-05 17:50:11.113584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.180 [2024-11-05 17:50:11.115593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.180 [2024-11-05 17:50:11.115704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:51.180 [2024-11-05 17:50:11.115745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.952 ms 00:15:51.180 [2024-11-05 17:50:11.115769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.180 [2024-11-05 17:50:11.117770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.180 [2024-11-05 17:50:11.117851] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:51.180 [2024-11-05 17:50:11.117883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.785 ms 00:15:51.180 [2024-11-05 17:50:11.117904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.180 [2024-11-05 17:50:11.117996] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:51.180 [2024-11-05 17:50:11.118036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.118104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.118130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.118165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.118188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.118217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.118240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.118268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.118291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.118319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.118343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.118371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.118395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.118423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.118446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.118479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.118504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.118532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.118555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.118588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.118610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.118638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 
17:50:11.118661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.118689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.118712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.118739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.118762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.118789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.118830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.118859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.118882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.118910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.118933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.118962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.118988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.119021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.119044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.119096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.119121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.119148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.119171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.119202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.119226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.119252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.119275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.119303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.119326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:15:51.180 [2024-11-05 17:50:11.119353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:51.180 [2024-11-05 17:50:11.119376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.119402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.119425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.119456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.119479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.119506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.119529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.119555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.119578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.119605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.119628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.119655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.119678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.119705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.119728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.119761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.119783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.119810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.119835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.119871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.119894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.119921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.119943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.119969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.119992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.120020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.120044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.120094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.120119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.120146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.120168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.120195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.120218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.120245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.120268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.120299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.120322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.120349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.120372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.120398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.120421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.120481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.120504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.120532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.120555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.120586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.120609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.120641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.120665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.120692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.120715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.120747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:51.181 [2024-11-05 17:50:11.120794] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:51.181 [2024-11-05 17:50:11.120828] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 338e29f3-c8eb-4323-bd91-f9a2bfa3b8fb 00:15:51.181 [2024-11-05 17:50:11.120851] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:51.181 [2024-11-05 17:50:11.120877] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:51.181 [2024-11-05 17:50:11.120898] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:51.181 [2024-11-05 17:50:11.120925] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:51.181 [2024-11-05 17:50:11.120946] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:51.181 [2024-11-05 17:50:11.120972] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:51.181 [2024-11-05 17:50:11.120994] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:51.181 [2024-11-05 17:50:11.121017] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:51.181 [2024-11-05 17:50:11.121037] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:51.181 [2024-11-05 17:50:11.121083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.181 [2024-11-05 17:50:11.121109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:51.181 [2024-11-05 17:50:11.121137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.069 ms 00:15:51.181 [2024-11-05 17:50:11.121159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.181 [2024-11-05 17:50:11.124573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.181 [2024-11-05 17:50:11.124885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:51.181 [2024-11-05 17:50:11.124944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.324 ms 00:15:51.181 [2024-11-05 17:50:11.124971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.181 [2024-11-05 17:50:11.125275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.181 [2024-11-05 17:50:11.125323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:51.181 [2024-11-05 17:50:11.125359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:15:51.181 [2024-11-05 17:50:11.125398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.181 [2024-11-05 17:50:11.132755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.181 [2024-11-05 17:50:11.132788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:51.181 [2024-11-05 17:50:11.132800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.181 [2024-11-05 17:50:11.132809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.181 
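The statistics dumped just above are what you would expect from a device that was created and then immediately unloaded: 960 total writes against 0 user writes, hence "WAF: inf" (all traffic so far is FTL metadata), and every band still reads "0 / 261120 wr_cnt: 0 state: free". The whole 'FTL shutdown' (persist L2P and metadata, set FTL clean state, then roll the init steps back, as the Rollback records around this point show) is triggered by the single bdev_ftl_unload RPC issued earlier. A minimal sketch of that call plus an illustrative follow-up check (the check itself is an assumption, not part of this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_ftl_unload -b ftl0   # persists L2P and metadata, marks the device clean, rolls back init
    # illustrative only: bdev_get_bdevs errors out once the name is gone
    $rpc bdev_get_bdevs -b ftl0 >/dev/null 2>&1 || echo "ftl0 unloaded"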
[2024-11-05 17:50:11.132875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.181 [2024-11-05 17:50:11.132894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:51.181 [2024-11-05 17:50:11.132905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.181 [2024-11-05 17:50:11.132914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.181 [2024-11-05 17:50:11.132999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.181 [2024-11-05 17:50:11.133013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:51.181 [2024-11-05 17:50:11.133023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.181 [2024-11-05 17:50:11.133030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.181 [2024-11-05 17:50:11.133060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.181 [2024-11-05 17:50:11.133095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:51.181 [2024-11-05 17:50:11.133119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.181 [2024-11-05 17:50:11.133126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.181 [2024-11-05 17:50:11.144870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.181 [2024-11-05 17:50:11.144910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:51.181 [2024-11-05 17:50:11.144934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.181 [2024-11-05 17:50:11.144941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.181 [2024-11-05 17:50:11.154607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.181 [2024-11-05 17:50:11.154647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:51.181 [2024-11-05 17:50:11.154661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.181 [2024-11-05 17:50:11.154672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.181 [2024-11-05 17:50:11.154762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.181 [2024-11-05 17:50:11.154772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:51.182 [2024-11-05 17:50:11.154782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.182 [2024-11-05 17:50:11.154789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.182 [2024-11-05 17:50:11.154850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.182 [2024-11-05 17:50:11.154860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:51.182 [2024-11-05 17:50:11.154870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.182 [2024-11-05 17:50:11.154877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.182 [2024-11-05 17:50:11.154963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.182 [2024-11-05 17:50:11.154973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:51.182 [2024-11-05 17:50:11.154982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.182 [2024-11-05 17:50:11.154990] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.182 [2024-11-05 17:50:11.155036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.182 [2024-11-05 17:50:11.155045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:51.182 [2024-11-05 17:50:11.155055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.182 [2024-11-05 17:50:11.155078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.182 [2024-11-05 17:50:11.155148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.182 [2024-11-05 17:50:11.155158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:51.182 [2024-11-05 17:50:11.155167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.182 [2024-11-05 17:50:11.155175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.182 [2024-11-05 17:50:11.155238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:51.182 [2024-11-05 17:50:11.155296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:51.182 [2024-11-05 17:50:11.155309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:51.182 [2024-11-05 17:50:11.155317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.182 [2024-11-05 17:50:11.155502] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 60.258 ms, result 0 00:15:51.182 true 00:15:51.182 17:50:11 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 85066 00:15:51.182 17:50:11 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # '[' -z 85066 ']' 00:15:51.182 17:50:11 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # kill -0 85066 00:15:51.440 17:50:11 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # uname 00:15:51.440 17:50:11 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:51.440 17:50:11 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85066 00:15:51.440 killing process with pid 85066 00:15:51.440 17:50:11 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:51.440 17:50:11 ftl.ftl_fio_basic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:51.440 17:50:11 ftl.ftl_fio_basic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85066' 00:15:51.440 17:50:11 ftl.ftl_fio_basic -- common/autotest_common.sh@971 -- # kill 85066 00:15:51.440 17:50:11 ftl.ftl_fio_basic -- common/autotest_common.sh@976 -- # wait 85066 00:15:59.551 17:50:19 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:59.551 17:50:19 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:59.551 17:50:19 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:59.551 17:50:19 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:59.551 17:50:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:59.552 17:50:19 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:59.552 17:50:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:59.552 17:50:19 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:15:59.552 17:50:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:59.552 17:50:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:15:59.552 17:50:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:59.552 17:50:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:15:59.552 17:50:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:15:59.552 17:50:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:15:59.552 17:50:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:15:59.552 17:50:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:59.552 17:50:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:15:59.552 17:50:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:59.552 17:50:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:59.552 17:50:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:15:59.552 17:50:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:59.552 17:50:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:59.552 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:59.552 fio-3.35 00:15:59.552 Starting 1 thread 00:16:06.149 00:16:06.149 test: (groupid=0, jobs=1): err= 0: pid=85238: Tue Nov 5 17:50:25 2024 00:16:06.149 read: IOPS=738, BW=49.0MiB/s (51.4MB/s)(255MiB/5190msec) 00:16:06.149 slat (nsec): min=2951, max=36495, avg=7302.68, stdev=4197.43 00:16:06.149 clat (usec): min=257, max=8285, avg=616.82, stdev=332.53 00:16:06.149 lat (usec): min=263, max=8297, avg=624.12, stdev=334.94 00:16:06.149 clat percentiles (usec): 00:16:06.149 | 1.00th=[ 306], 5.00th=[ 314], 10.00th=[ 322], 20.00th=[ 330], 00:16:06.149 | 30.00th=[ 338], 40.00th=[ 392], 50.00th=[ 469], 60.00th=[ 611], 00:16:06.149 | 70.00th=[ 848], 80.00th=[ 979], 90.00th=[ 1074], 95.00th=[ 1123], 00:16:06.149 | 99.00th=[ 1270], 99.50th=[ 1319], 99.90th=[ 1483], 99.95th=[ 1582], 00:16:06.149 | 99.99th=[ 8291] 00:16:06.149 write: IOPS=743, BW=49.4MiB/s (51.8MB/s)(256MiB/5187msec); 0 zone resets 00:16:06.149 slat (nsec): min=13750, max=97198, avg=22697.02, stdev=5929.76 00:16:06.149 clat (usec): min=287, max=2519, avg=689.04, stdev=361.10 00:16:06.149 lat (usec): min=303, max=2556, avg=711.73, stdev=364.17 00:16:06.149 clat percentiles (usec): 00:16:06.149 | 1.00th=[ 330], 5.00th=[ 343], 10.00th=[ 347], 20.00th=[ 355], 00:16:06.149 | 30.00th=[ 363], 40.00th=[ 457], 50.00th=[ 586], 60.00th=[ 701], 00:16:06.149 | 70.00th=[ 922], 80.00th=[ 1074], 90.00th=[ 1156], 95.00th=[ 1237], 00:16:06.149 | 99.00th=[ 1860], 99.50th=[ 2040], 99.90th=[ 2212], 99.95th=[ 2278], 00:16:06.149 | 99.99th=[ 2507] 00:16:06.149 bw ( KiB/s): min=28696, max=76840, per=97.39%, avg=49232.00, stdev=19333.17, samples=10 00:16:06.149 iops : min= 422, max= 1130, avg=724.00, stdev=284.31, samples=10 00:16:06.149 lat (usec) : 500=48.80%, 750=15.76%, 
1000=13.81% 00:16:06.149 lat (msec) : 2=21.26%, 4=0.35%, 10=0.01% 00:16:06.149 cpu : usr=99.07%, sys=0.08%, ctx=14, majf=0, minf=1181 00:16:06.149 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:06.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.149 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.149 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.149 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:06.149 00:16:06.149 Run status group 0 (all jobs): 00:16:06.149 READ: bw=49.0MiB/s (51.4MB/s), 49.0MiB/s-49.0MiB/s (51.4MB/s-51.4MB/s), io=255MiB (267MB), run=5190-5190msec 00:16:06.149 WRITE: bw=49.4MiB/s (51.8MB/s), 49.4MiB/s-49.4MiB/s (51.8MB/s-51.8MB/s), io=256MiB (269MB), run=5187-5187msec 00:16:06.411 ----------------------------------------------------- 00:16:06.411 Suppressions used: 00:16:06.411 count bytes template 00:16:06.411 1 5 /usr/src/fio/parse.c 00:16:06.411 1 8 libtcmalloc_minimal.so 00:16:06.411 1 904 libcrypto.so 00:16:06.411 ----------------------------------------------------- 00:16:06.411 00:16:06.411 17:50:26 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:16:06.411 17:50:26 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:06.411 17:50:26 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:06.411 17:50:26 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:06.411 17:50:26 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:16:06.411 17:50:26 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:06.411 17:50:26 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:06.411 17:50:26 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:06.411 17:50:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:06.411 17:50:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:16:06.411 17:50:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:06.411 17:50:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:16:06.411 17:50:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:06.411 17:50:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:16:06.411 17:50:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:16:06.411 17:50:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:16:06.411 17:50:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:06.411 17:50:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:16:06.411 17:50:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:16:06.411 17:50:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:06.411 17:50:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:06.411 17:50:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:16:06.411 17:50:26 
ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:06.411 17:50:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:06.672 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:06.672 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:06.672 fio-3.35 00:16:06.672 Starting 2 threads 00:16:33.251 00:16:33.251 first_half: (groupid=0, jobs=1): err= 0: pid=85336: Tue Nov 5 17:50:51 2024 00:16:33.251 read: IOPS=2704, BW=10.6MiB/s (11.1MB/s)(255MiB/24124msec) 00:16:33.251 slat (nsec): min=3023, max=26387, avg=4656.27, stdev=1129.59 00:16:33.251 clat (usec): min=724, max=322698, avg=36896.48, stdev=21621.31 00:16:33.251 lat (usec): min=729, max=322703, avg=36901.13, stdev=21621.39 00:16:33.251 clat percentiles (msec): 00:16:33.251 | 1.00th=[ 12], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 31], 00:16:33.251 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 32], 00:16:33.251 | 70.00th=[ 34], 80.00th=[ 37], 90.00th=[ 42], 95.00th=[ 62], 00:16:33.251 | 99.00th=[ 150], 99.50th=[ 180], 99.90th=[ 215], 99.95th=[ 239], 00:16:33.251 | 99.99th=[ 305] 00:16:33.251 write: IOPS=3264, BW=12.8MiB/s (13.4MB/s)(256MiB/20073msec); 0 zone resets 00:16:33.251 slat (usec): min=3, max=1399, avg= 6.40, stdev= 7.94 00:16:33.251 clat (usec): min=377, max=124568, avg=10366.99, stdev=18581.93 00:16:33.251 lat (usec): min=387, max=124573, avg=10373.39, stdev=18582.00 00:16:33.251 clat percentiles (usec): 00:16:33.251 | 1.00th=[ 742], 5.00th=[ 979], 10.00th=[ 1172], 20.00th=[ 1745], 00:16:33.251 | 30.00th=[ 2769], 40.00th=[ 3818], 50.00th=[ 5211], 60.00th=[ 6259], 00:16:33.251 | 70.00th=[ 7439], 80.00th=[ 10552], 90.00th=[ 17433], 95.00th=[ 62653], 00:16:33.251 | 99.00th=[108528], 99.50th=[111674], 99.90th=[119014], 99.95th=[121111], 00:16:33.251 | 99.99th=[122160] 00:16:33.251 bw ( KiB/s): min= 248, max=45416, per=97.52%, avg=21844.79, stdev=15394.92, samples=24 00:16:33.251 iops : min= 62, max=11354, avg=5461.17, stdev=3848.77, samples=24 00:16:33.251 lat (usec) : 500=0.02%, 750=0.55%, 1000=2.15% 00:16:33.251 lat (msec) : 2=8.86%, 4=9.63%, 10=18.76%, 20=6.96%, 50=46.74% 00:16:33.251 lat (msec) : 100=4.22%, 250=2.09%, 500=0.02% 00:16:33.251 cpu : usr=99.38%, sys=0.13%, ctx=48, majf=0, minf=5603 00:16:33.251 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:33.251 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.251 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:33.251 issued rwts: total=65235,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:33.251 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:33.251 second_half: (groupid=0, jobs=1): err= 0: pid=85337: Tue Nov 5 17:50:51 2024 00:16:33.251 read: IOPS=2689, BW=10.5MiB/s (11.0MB/s)(255MiB/24302msec) 00:16:33.251 slat (nsec): min=3057, max=50045, avg=5067.75, stdev=1109.15 00:16:33.251 clat (usec): min=709, max=357738, avg=36068.42, stdev=23707.74 00:16:33.251 lat (usec): min=714, max=357743, avg=36073.49, stdev=23707.78 00:16:33.251 clat percentiles (msec): 00:16:33.251 | 1.00th=[ 8], 5.00th=[ 27], 10.00th=[ 30], 20.00th=[ 31], 00:16:33.251 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 32], 00:16:33.251 | 70.00th=[ 
33], 80.00th=[ 37], 90.00th=[ 41], 95.00th=[ 56], 00:16:33.251 | 99.00th=[ 169], 99.50th=[ 194], 99.90th=[ 224], 99.95th=[ 326], 00:16:33.251 | 99.99th=[ 355] 00:16:33.251 write: IOPS=2799, BW=10.9MiB/s (11.5MB/s)(256MiB/23406msec); 0 zone resets 00:16:33.251 slat (usec): min=3, max=1258, avg= 6.50, stdev= 8.36 00:16:33.251 clat (usec): min=364, max=123763, avg=11465.40, stdev=19576.89 00:16:33.251 lat (usec): min=383, max=123769, avg=11471.90, stdev=19576.96 00:16:33.251 clat percentiles (usec): 00:16:33.251 | 1.00th=[ 734], 5.00th=[ 996], 10.00th=[ 1254], 20.00th=[ 1860], 00:16:33.251 | 30.00th=[ 3359], 40.00th=[ 4752], 50.00th=[ 5407], 60.00th=[ 5866], 00:16:33.251 | 70.00th=[ 7177], 80.00th=[ 11863], 90.00th=[ 27132], 95.00th=[ 64226], 00:16:33.251 | 99.00th=[109577], 99.50th=[113771], 99.90th=[120062], 99.95th=[122160], 00:16:33.251 | 99.99th=[123208] 00:16:33.251 bw ( KiB/s): min= 976, max=49032, per=86.68%, avg=19417.78, stdev=13732.38, samples=27 00:16:33.251 iops : min= 244, max=12258, avg=4854.44, stdev=3433.09, samples=27 00:16:33.251 lat (usec) : 500=0.03%, 750=0.56%, 1000=1.94% 00:16:33.251 lat (msec) : 2=8.42%, 4=6.49%, 10=23.02%, 20=6.14%, 50=47.66% 00:16:33.251 lat (msec) : 100=3.55%, 250=2.15%, 500=0.04% 00:16:33.251 cpu : usr=99.20%, sys=0.14%, ctx=34, majf=0, minf=5529 00:16:33.251 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:33.251 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.251 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:33.251 issued rwts: total=65371,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:33.251 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:33.251 00:16:33.251 Run status group 0 (all jobs): 00:16:33.251 READ: bw=21.0MiB/s (22.0MB/s), 10.5MiB/s-10.6MiB/s (11.0MB/s-11.1MB/s), io=510MiB (535MB), run=24124-24302msec 00:16:33.251 WRITE: bw=21.9MiB/s (22.9MB/s), 10.9MiB/s-12.8MiB/s (11.5MB/s-13.4MB/s), io=512MiB (537MB), run=20073-23406msec 00:16:33.251 ----------------------------------------------------- 00:16:33.251 Suppressions used: 00:16:33.251 count bytes template 00:16:33.251 2 10 /usr/src/fio/parse.c 00:16:33.251 2 192 /usr/src/fio/iolog.c 00:16:33.251 1 8 libtcmalloc_minimal.so 00:16:33.251 1 904 libcrypto.so 00:16:33.251 ----------------------------------------------------- 00:16:33.251 00:16:33.251 17:50:52 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:16:33.251 17:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:33.251 17:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:33.251 17:50:52 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:33.251 17:50:52 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:33.251 17:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:33.251 17:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:33.251 17:50:52 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:33.251 17:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:33.251 17:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:16:33.251 17:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- 
# sanitizers=('libasan' 'libclang_rt.asan') 00:16:33.251 17:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:16:33.251 17:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:33.251 17:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:16:33.251 17:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:16:33.251 17:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:16:33.251 17:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:16:33.251 17:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:33.251 17:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:16:33.251 17:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:33.251 17:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:33.251 17:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:16:33.251 17:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:33.251 17:50:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:33.251 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:33.251 fio-3.35 00:16:33.251 Starting 1 thread 00:16:51.368 00:16:51.368 test: (groupid=0, jobs=1): err= 0: pid=85644: Tue Nov 5 17:51:09 2024 00:16:51.368 read: IOPS=7402, BW=28.9MiB/s (30.3MB/s)(255MiB/8808msec) 00:16:51.368 slat (nsec): min=2962, max=36385, avg=3649.40, stdev=711.49 00:16:51.368 clat (usec): min=662, max=33899, avg=17284.20, stdev=2074.23 00:16:51.368 lat (usec): min=669, max=33903, avg=17287.85, stdev=2074.24 00:16:51.368 clat percentiles (usec): 00:16:51.368 | 1.00th=[14615], 5.00th=[15139], 10.00th=[15401], 20.00th=[15795], 00:16:51.368 | 30.00th=[16188], 40.00th=[16581], 50.00th=[16909], 60.00th=[17171], 00:16:51.368 | 70.00th=[17695], 80.00th=[18220], 90.00th=[19792], 95.00th=[21890], 00:16:51.368 | 99.00th=[24511], 99.50th=[26346], 99.90th=[28181], 99.95th=[28967], 00:16:51.368 | 99.99th=[32900] 00:16:51.368 write: IOPS=8687, BW=33.9MiB/s (35.6MB/s)(256MiB/7544msec); 0 zone resets 00:16:51.368 slat (usec): min=4, max=682, avg= 8.29, stdev= 7.29 00:16:51.368 clat (usec): min=603, max=69420, avg=14651.90, stdev=17088.80 00:16:51.368 lat (usec): min=609, max=69443, avg=14660.19, stdev=17088.85 00:16:51.368 clat percentiles (usec): 00:16:51.368 | 1.00th=[ 1029], 5.00th=[ 1319], 10.00th=[ 1532], 20.00th=[ 1926], 00:16:51.368 | 30.00th=[ 2343], 40.00th=[ 3392], 50.00th=[ 9241], 60.00th=[12518], 00:16:51.368 | 70.00th=[15795], 80.00th=[19006], 90.00th=[44303], 95.00th=[56886], 00:16:51.368 | 99.00th=[63177], 99.50th=[64226], 99.90th=[66847], 99.95th=[67634], 00:16:51.368 | 99.99th=[68682] 00:16:51.368 bw ( KiB/s): min= 1872, max=51696, per=94.30%, avg=32768.00, stdev=10649.07, samples=16 00:16:51.368 iops : min= 468, max=12924, avg=8191.88, stdev=2662.17, samples=16 00:16:51.368 lat (usec) : 750=0.03%, 1000=0.37% 00:16:51.368 lat (msec) : 2=10.59%, 4=9.65%, 10=5.55%, 20=59.96%, 50=9.60% 00:16:51.368 lat (msec) : 100=4.24% 
00:16:51.368 cpu : usr=98.97%, sys=0.25%, ctx=33, majf=0, minf=5577 00:16:51.368 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:51.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.368 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:51.368 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:51.368 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:51.368 00:16:51.368 Run status group 0 (all jobs): 00:16:51.368 READ: bw=28.9MiB/s (30.3MB/s), 28.9MiB/s-28.9MiB/s (30.3MB/s-30.3MB/s), io=255MiB (267MB), run=8808-8808msec 00:16:51.368 WRITE: bw=33.9MiB/s (35.6MB/s), 33.9MiB/s-33.9MiB/s (35.6MB/s-35.6MB/s), io=256MiB (268MB), run=7544-7544msec 00:16:51.368 ----------------------------------------------------- 00:16:51.368 Suppressions used: 00:16:51.368 count bytes template 00:16:51.368 1 5 /usr/src/fio/parse.c 00:16:51.368 2 192 /usr/src/fio/iolog.c 00:16:51.368 1 8 libtcmalloc_minimal.so 00:16:51.368 1 904 libcrypto.so 00:16:51.368 ----------------------------------------------------- 00:16:51.368 00:16:51.368 17:51:10 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:51.368 17:51:10 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:51.368 17:51:10 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:51.368 17:51:10 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:51.368 Remove shared memory files 00:16:51.368 17:51:10 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:51.368 17:51:10 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:51.368 17:51:10 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:51.368 17:51:10 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:51.368 17:51:10 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid70539 /dev/shm/spdk_tgt_trace.pid84014 00:16:51.368 17:51:11 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:51.368 17:51:11 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:51.368 ************************************ 00:16:51.368 END TEST ftl_fio_basic 00:16:51.368 ************************************ 00:16:51.368 00:16:51.368 real 1m7.384s 00:16:51.368 user 2m32.367s 00:16:51.368 sys 0m3.184s 00:16:51.368 17:51:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:51.368 17:51:11 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:51.368 17:51:11 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:51.368 17:51:11 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:16:51.368 17:51:11 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:51.368 17:51:11 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:51.368 ************************************ 00:16:51.368 START TEST ftl_bdevperf 00:16:51.368 ************************************ 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:51.368 * Looking for test storage... 
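Each of the three fio passes above is driven through the same invocation pattern that the traced fio_bdev/fio_plugin helpers assemble: the ASan runtime is resolved from the SPDK fio plugin with ldd, then fio is launched with both preloaded so the spdk_bdev ioengine is available to the job file. A minimal sketch of the equivalent manual invocation, using the paths from the trace (only the job file changes between passes):

    # resolve the ASan runtime the plugin links against, as the helper does with ldd | grep | awk
    asan_lib=$(ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev | grep libasan | awk '{print $3}')
    # preload ASan first, then the SPDK bdev ioengine, and run the job file
    LD_PRELOAD="$asan_lib /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev" \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio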
00:16:51.368 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:51.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.368 --rc genhtml_branch_coverage=1 00:16:51.368 --rc genhtml_function_coverage=1 00:16:51.368 --rc genhtml_legend=1 00:16:51.368 --rc geninfo_all_blocks=1 00:16:51.368 --rc geninfo_unexecuted_blocks=1 00:16:51.368 00:16:51.368 ' 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:51.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.368 --rc genhtml_branch_coverage=1 00:16:51.368 
--rc genhtml_function_coverage=1 00:16:51.368 --rc genhtml_legend=1 00:16:51.368 --rc geninfo_all_blocks=1 00:16:51.368 --rc geninfo_unexecuted_blocks=1 00:16:51.368 00:16:51.368 ' 00:16:51.368 17:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:51.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.368 --rc genhtml_branch_coverage=1 00:16:51.369 --rc genhtml_function_coverage=1 00:16:51.369 --rc genhtml_legend=1 00:16:51.369 --rc geninfo_all_blocks=1 00:16:51.369 --rc geninfo_unexecuted_blocks=1 00:16:51.369 00:16:51.369 ' 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:51.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.369 --rc genhtml_branch_coverage=1 00:16:51.369 --rc genhtml_function_coverage=1 00:16:51.369 --rc genhtml_legend=1 00:16:51.369 --rc geninfo_all_blocks=1 00:16:51.369 --rc geninfo_unexecuted_blocks=1 00:16:51.369 00:16:51.369 ' 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=85904 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 85904 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 85904 ']' 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:51.369 17:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:51.369 [2024-11-05 17:51:11.308145] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:16:51.369 [2024-11-05 17:51:11.308392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85904 ] 00:16:51.630 [2024-11-05 17:51:11.440721] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
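bdevperf is started idle here (-z) so the FTL bdev stack can be assembled over RPC before any I/O runs, with -T ftl0 restricting the run to the ftl0 bdev created below; the harness then blocks in waitforlisten until the new process answers on its RPC socket. A condensed sketch of that startup, assuming the default /var/tmp/spdk.sock socket (the polling loop paraphrases waitforlisten rather than quoting it):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &   # -z: wait for an RPC before running the workload
    bdevperf_pid=$!
    trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
    # block until the application's RPC server is up
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done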
00:16:51.630 [2024-11-05 17:51:11.463530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.630 [2024-11-05 17:51:11.488861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.200 17:51:12 ftl.ftl_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:52.200 17:51:12 ftl.ftl_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:16:52.200 17:51:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:52.200 17:51:12 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:52.200 17:51:12 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:52.200 17:51:12 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:52.200 17:51:12 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:52.200 17:51:12 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:52.460 17:51:12 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:52.460 17:51:12 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:52.460 17:51:12 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:52.460 17:51:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:16:52.460 17:51:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:52.460 17:51:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:16:52.460 17:51:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:16:52.460 17:51:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:52.720 17:51:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:52.720 { 00:16:52.720 "name": "nvme0n1", 00:16:52.720 "aliases": [ 00:16:52.720 "8fab5109-705e-41a2-bcb4-87022941b410" 00:16:52.720 ], 00:16:52.720 "product_name": "NVMe disk", 00:16:52.720 "block_size": 4096, 00:16:52.720 "num_blocks": 1310720, 00:16:52.720 "uuid": "8fab5109-705e-41a2-bcb4-87022941b410", 00:16:52.720 "numa_id": -1, 00:16:52.720 "assigned_rate_limits": { 00:16:52.720 "rw_ios_per_sec": 0, 00:16:52.720 "rw_mbytes_per_sec": 0, 00:16:52.720 "r_mbytes_per_sec": 0, 00:16:52.720 "w_mbytes_per_sec": 0 00:16:52.720 }, 00:16:52.720 "claimed": true, 00:16:52.720 "claim_type": "read_many_write_one", 00:16:52.720 "zoned": false, 00:16:52.720 "supported_io_types": { 00:16:52.720 "read": true, 00:16:52.720 "write": true, 00:16:52.720 "unmap": true, 00:16:52.720 "flush": true, 00:16:52.720 "reset": true, 00:16:52.720 "nvme_admin": true, 00:16:52.720 "nvme_io": true, 00:16:52.720 "nvme_io_md": false, 00:16:52.720 "write_zeroes": true, 00:16:52.720 "zcopy": false, 00:16:52.720 "get_zone_info": false, 00:16:52.720 "zone_management": false, 00:16:52.720 "zone_append": false, 00:16:52.720 "compare": true, 00:16:52.720 "compare_and_write": false, 00:16:52.720 "abort": true, 00:16:52.720 "seek_hole": false, 00:16:52.720 "seek_data": false, 00:16:52.720 "copy": true, 00:16:52.720 "nvme_iov_md": false 00:16:52.720 }, 00:16:52.720 "driver_specific": { 00:16:52.720 "nvme": [ 00:16:52.720 { 00:16:52.720 "pci_address": "0000:00:11.0", 00:16:52.720 "trid": { 00:16:52.720 "trtype": "PCIe", 00:16:52.720 "traddr": "0000:00:11.0" 00:16:52.720 }, 00:16:52.720 "ctrlr_data": { 00:16:52.720 "cntlid": 0, 00:16:52.720 "vendor_id": "0x1b36", 00:16:52.720 "model_number": "QEMU NVMe Ctrl", 
00:16:52.720 "serial_number": "12341", 00:16:52.720 "firmware_revision": "8.0.0", 00:16:52.720 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:52.720 "oacs": { 00:16:52.720 "security": 0, 00:16:52.720 "format": 1, 00:16:52.720 "firmware": 0, 00:16:52.720 "ns_manage": 1 00:16:52.720 }, 00:16:52.720 "multi_ctrlr": false, 00:16:52.720 "ana_reporting": false 00:16:52.720 }, 00:16:52.720 "vs": { 00:16:52.720 "nvme_version": "1.4" 00:16:52.720 }, 00:16:52.720 "ns_data": { 00:16:52.720 "id": 1, 00:16:52.720 "can_share": false 00:16:52.720 } 00:16:52.720 } 00:16:52.720 ], 00:16:52.720 "mp_policy": "active_passive" 00:16:52.720 } 00:16:52.720 } 00:16:52.720 ]' 00:16:52.720 17:51:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:52.720 17:51:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:16:52.720 17:51:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:52.720 17:51:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=1310720 00:16:52.720 17:51:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:16:52.720 17:51:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 5120 00:16:52.720 17:51:12 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:52.720 17:51:12 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:52.720 17:51:12 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:52.720 17:51:12 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:52.720 17:51:12 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:52.981 17:51:12 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=b9494dc1-b136-4097-89b6-edad4a883edb 00:16:52.981 17:51:12 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:52.981 17:51:12 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b9494dc1-b136-4097-89b6-edad4a883edb 00:16:53.242 17:51:13 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:53.503 17:51:13 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=602d0787-0a24-4377-b57d-edd3497e8cba 00:16:53.503 17:51:13 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 602d0787-0a24-4377-b57d-edd3497e8cba 00:16:53.764 17:51:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=95e2826d-80f4-4415-b162-ea8ec931944b 00:16:53.764 17:51:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 95e2826d-80f4-4415-b162-ea8ec931944b 00:16:53.764 17:51:13 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:53.764 17:51:13 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:53.764 17:51:13 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=95e2826d-80f4-4415-b162-ea8ec931944b 00:16:53.764 17:51:13 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:53.764 17:51:13 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 95e2826d-80f4-4415-b162-ea8ec931944b 00:16:53.764 17:51:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=95e2826d-80f4-4415-b162-ea8ec931944b 00:16:53.764 17:51:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:53.764 17:51:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:16:53.764 17:51:13 
ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:16:53.764 17:51:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 95e2826d-80f4-4415-b162-ea8ec931944b 00:16:53.764 17:51:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:53.764 { 00:16:53.764 "name": "95e2826d-80f4-4415-b162-ea8ec931944b", 00:16:53.764 "aliases": [ 00:16:53.764 "lvs/nvme0n1p0" 00:16:53.764 ], 00:16:53.764 "product_name": "Logical Volume", 00:16:53.764 "block_size": 4096, 00:16:53.764 "num_blocks": 26476544, 00:16:53.764 "uuid": "95e2826d-80f4-4415-b162-ea8ec931944b", 00:16:53.764 "assigned_rate_limits": { 00:16:53.764 "rw_ios_per_sec": 0, 00:16:53.764 "rw_mbytes_per_sec": 0, 00:16:53.764 "r_mbytes_per_sec": 0, 00:16:53.764 "w_mbytes_per_sec": 0 00:16:53.764 }, 00:16:53.764 "claimed": false, 00:16:53.764 "zoned": false, 00:16:53.764 "supported_io_types": { 00:16:53.764 "read": true, 00:16:53.764 "write": true, 00:16:53.764 "unmap": true, 00:16:53.764 "flush": false, 00:16:53.764 "reset": true, 00:16:53.764 "nvme_admin": false, 00:16:53.764 "nvme_io": false, 00:16:53.764 "nvme_io_md": false, 00:16:53.764 "write_zeroes": true, 00:16:53.764 "zcopy": false, 00:16:53.764 "get_zone_info": false, 00:16:53.764 "zone_management": false, 00:16:53.764 "zone_append": false, 00:16:53.764 "compare": false, 00:16:53.764 "compare_and_write": false, 00:16:53.764 "abort": false, 00:16:53.764 "seek_hole": true, 00:16:53.764 "seek_data": true, 00:16:53.764 "copy": false, 00:16:53.764 "nvme_iov_md": false 00:16:53.764 }, 00:16:53.764 "driver_specific": { 00:16:53.764 "lvol": { 00:16:53.764 "lvol_store_uuid": "602d0787-0a24-4377-b57d-edd3497e8cba", 00:16:53.764 "base_bdev": "nvme0n1", 00:16:53.764 "thin_provision": true, 00:16:53.764 "num_allocated_clusters": 0, 00:16:53.764 "snapshot": false, 00:16:53.764 "clone": false, 00:16:53.764 "esnap_clone": false 00:16:53.764 } 00:16:53.764 } 00:16:53.764 } 00:16:53.764 ]' 00:16:53.764 17:51:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:54.026 17:51:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:16:54.026 17:51:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:54.026 17:51:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:16:54.026 17:51:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:16:54.026 17:51:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:16:54.026 17:51:13 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:54.026 17:51:13 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:54.026 17:51:13 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:54.287 17:51:14 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:54.287 17:51:14 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:54.287 17:51:14 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 95e2826d-80f4-4415-b162-ea8ec931944b 00:16:54.287 17:51:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=95e2826d-80f4-4415-b162-ea8ec931944b 00:16:54.287 17:51:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:54.287 17:51:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:16:54.287 17:51:14 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1383 -- # local nb 00:16:54.287 17:51:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 95e2826d-80f4-4415-b162-ea8ec931944b 00:16:54.548 17:51:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:54.548 { 00:16:54.548 "name": "95e2826d-80f4-4415-b162-ea8ec931944b", 00:16:54.548 "aliases": [ 00:16:54.548 "lvs/nvme0n1p0" 00:16:54.548 ], 00:16:54.548 "product_name": "Logical Volume", 00:16:54.548 "block_size": 4096, 00:16:54.548 "num_blocks": 26476544, 00:16:54.548 "uuid": "95e2826d-80f4-4415-b162-ea8ec931944b", 00:16:54.548 "assigned_rate_limits": { 00:16:54.548 "rw_ios_per_sec": 0, 00:16:54.548 "rw_mbytes_per_sec": 0, 00:16:54.548 "r_mbytes_per_sec": 0, 00:16:54.548 "w_mbytes_per_sec": 0 00:16:54.548 }, 00:16:54.548 "claimed": false, 00:16:54.548 "zoned": false, 00:16:54.548 "supported_io_types": { 00:16:54.548 "read": true, 00:16:54.548 "write": true, 00:16:54.548 "unmap": true, 00:16:54.548 "flush": false, 00:16:54.548 "reset": true, 00:16:54.548 "nvme_admin": false, 00:16:54.548 "nvme_io": false, 00:16:54.548 "nvme_io_md": false, 00:16:54.548 "write_zeroes": true, 00:16:54.548 "zcopy": false, 00:16:54.548 "get_zone_info": false, 00:16:54.548 "zone_management": false, 00:16:54.548 "zone_append": false, 00:16:54.548 "compare": false, 00:16:54.548 "compare_and_write": false, 00:16:54.548 "abort": false, 00:16:54.548 "seek_hole": true, 00:16:54.548 "seek_data": true, 00:16:54.548 "copy": false, 00:16:54.548 "nvme_iov_md": false 00:16:54.548 }, 00:16:54.548 "driver_specific": { 00:16:54.548 "lvol": { 00:16:54.548 "lvol_store_uuid": "602d0787-0a24-4377-b57d-edd3497e8cba", 00:16:54.548 "base_bdev": "nvme0n1", 00:16:54.548 "thin_provision": true, 00:16:54.548 "num_allocated_clusters": 0, 00:16:54.548 "snapshot": false, 00:16:54.548 "clone": false, 00:16:54.548 "esnap_clone": false 00:16:54.548 } 00:16:54.548 } 00:16:54.548 } 00:16:54.548 ]' 00:16:54.548 17:51:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:54.548 17:51:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:16:54.548 17:51:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:54.548 17:51:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:16:54.548 17:51:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:16:54.548 17:51:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:16:54.548 17:51:14 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:54.548 17:51:14 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:54.809 17:51:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:16:54.809 17:51:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 95e2826d-80f4-4415-b162-ea8ec931944b 00:16:54.809 17:51:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=95e2826d-80f4-4415-b162-ea8ec931944b 00:16:54.809 17:51:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:54.809 17:51:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:16:54.809 17:51:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:16:54.809 17:51:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 95e2826d-80f4-4415-b162-ea8ec931944b 
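Before the FTL device can be created, the cache NVMe namespace nvc0n1 is carved down to the cache_size computed above: a single 5171 MiB split child, which comes back named nvc0n1p0 and serves below as the FTL write-buffer (NV cache) device. A condensed restatement of the traced RPC:

    # one split of 5171 MiB -> child bdev nvc0n1p0, passed to bdev_ftl_create as -c below
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
    nv_cache=nvc0n1p0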
00:16:54.809 17:51:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:54.809 { 00:16:54.809 "name": "95e2826d-80f4-4415-b162-ea8ec931944b", 00:16:54.809 "aliases": [ 00:16:54.809 "lvs/nvme0n1p0" 00:16:54.809 ], 00:16:54.809 "product_name": "Logical Volume", 00:16:54.809 "block_size": 4096, 00:16:54.809 "num_blocks": 26476544, 00:16:54.809 "uuid": "95e2826d-80f4-4415-b162-ea8ec931944b", 00:16:54.809 "assigned_rate_limits": { 00:16:54.809 "rw_ios_per_sec": 0, 00:16:54.809 "rw_mbytes_per_sec": 0, 00:16:54.809 "r_mbytes_per_sec": 0, 00:16:54.809 "w_mbytes_per_sec": 0 00:16:54.809 }, 00:16:54.809 "claimed": false, 00:16:54.809 "zoned": false, 00:16:54.809 "supported_io_types": { 00:16:54.809 "read": true, 00:16:54.809 "write": true, 00:16:54.809 "unmap": true, 00:16:54.809 "flush": false, 00:16:54.809 "reset": true, 00:16:54.809 "nvme_admin": false, 00:16:54.809 "nvme_io": false, 00:16:54.809 "nvme_io_md": false, 00:16:54.809 "write_zeroes": true, 00:16:54.809 "zcopy": false, 00:16:54.809 "get_zone_info": false, 00:16:54.809 "zone_management": false, 00:16:54.809 "zone_append": false, 00:16:54.809 "compare": false, 00:16:54.809 "compare_and_write": false, 00:16:54.809 "abort": false, 00:16:54.809 "seek_hole": true, 00:16:54.809 "seek_data": true, 00:16:54.809 "copy": false, 00:16:54.809 "nvme_iov_md": false 00:16:54.809 }, 00:16:54.809 "driver_specific": { 00:16:54.809 "lvol": { 00:16:54.809 "lvol_store_uuid": "602d0787-0a24-4377-b57d-edd3497e8cba", 00:16:54.809 "base_bdev": "nvme0n1", 00:16:54.809 "thin_provision": true, 00:16:54.809 "num_allocated_clusters": 0, 00:16:54.809 "snapshot": false, 00:16:54.809 "clone": false, 00:16:54.809 "esnap_clone": false 00:16:54.809 } 00:16:54.809 } 00:16:54.809 } 00:16:54.809 ]' 00:16:54.809 17:51:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:55.070 17:51:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:16:55.070 17:51:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:55.070 17:51:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:16:55.070 17:51:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:16:55.070 17:51:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:16:55.070 17:51:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:16:55.070 17:51:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 95e2826d-80f4-4415-b162-ea8ec931944b -c nvc0n1p0 --l2p_dram_limit 20 00:16:55.070 [2024-11-05 17:51:15.032342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.070 [2024-11-05 17:51:15.032415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:55.070 [2024-11-05 17:51:15.032428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:55.070 [2024-11-05 17:51:15.032437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.070 [2024-11-05 17:51:15.032492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.070 [2024-11-05 17:51:15.032504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:55.070 [2024-11-05 17:51:15.032512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:16:55.070 [2024-11-05 17:51:15.032521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.070 [2024-11-05 
17:51:15.032537] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:55.070 [2024-11-05 17:51:15.032809] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:55.070 [2024-11-05 17:51:15.032820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.070 [2024-11-05 17:51:15.032832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:55.070 [2024-11-05 17:51:15.032840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:16:55.070 [2024-11-05 17:51:15.032848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.070 [2024-11-05 17:51:15.032875] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 50d60535-ce4c-4a36-920d-dbeeb1ef0780 00:16:55.070 [2024-11-05 17:51:15.034218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.070 [2024-11-05 17:51:15.034251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:55.070 [2024-11-05 17:51:15.034261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:16:55.070 [2024-11-05 17:51:15.034268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.070 [2024-11-05 17:51:15.041109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.070 [2024-11-05 17:51:15.041150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:55.070 [2024-11-05 17:51:15.041164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.779 ms 00:16:55.070 [2024-11-05 17:51:15.041172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.070 [2024-11-05 17:51:15.041256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.070 [2024-11-05 17:51:15.041264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:55.070 [2024-11-05 17:51:15.041276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:16:55.070 [2024-11-05 17:51:15.041283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.070 [2024-11-05 17:51:15.041339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.070 [2024-11-05 17:51:15.041346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:55.070 [2024-11-05 17:51:15.041355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:55.070 [2024-11-05 17:51:15.041361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.070 [2024-11-05 17:51:15.041381] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:55.070 [2024-11-05 17:51:15.043059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.070 [2024-11-05 17:51:15.043105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:55.070 [2024-11-05 17:51:15.043116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.686 ms 00:16:55.070 [2024-11-05 17:51:15.043124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.070 [2024-11-05 17:51:15.043156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.070 [2024-11-05 17:51:15.043172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:55.070 [2024-11-05 
17:51:15.043179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:55.070 [2024-11-05 17:51:15.043188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.070 [2024-11-05 17:51:15.043212] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:55.070 [2024-11-05 17:51:15.043342] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:55.070 [2024-11-05 17:51:15.043351] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:55.070 [2024-11-05 17:51:15.043362] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:55.070 [2024-11-05 17:51:15.043370] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:55.070 [2024-11-05 17:51:15.043379] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:55.070 [2024-11-05 17:51:15.043385] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:55.070 [2024-11-05 17:51:15.043393] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:55.070 [2024-11-05 17:51:15.043398] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:55.070 [2024-11-05 17:51:15.043409] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:55.070 [2024-11-05 17:51:15.043415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.070 [2024-11-05 17:51:15.043425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:55.070 [2024-11-05 17:51:15.043432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:16:55.070 [2024-11-05 17:51:15.043440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.070 [2024-11-05 17:51:15.043503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.070 [2024-11-05 17:51:15.043511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:55.070 [2024-11-05 17:51:15.043516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:55.070 [2024-11-05 17:51:15.043523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.070 [2024-11-05 17:51:15.043595] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:55.070 [2024-11-05 17:51:15.043604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:55.070 [2024-11-05 17:51:15.043610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:55.070 [2024-11-05 17:51:15.043618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.070 [2024-11-05 17:51:15.043624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:55.070 [2024-11-05 17:51:15.043631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:55.070 [2024-11-05 17:51:15.043642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:55.070 [2024-11-05 17:51:15.043649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:55.070 [2024-11-05 17:51:15.043654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:55.070 [2024-11-05 17:51:15.043661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.50 MiB 00:16:55.070 [2024-11-05 17:51:15.043665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:55.070 [2024-11-05 17:51:15.043675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:55.070 [2024-11-05 17:51:15.043680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:55.070 [2024-11-05 17:51:15.043686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:55.070 [2024-11-05 17:51:15.043691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:55.070 [2024-11-05 17:51:15.043700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.070 [2024-11-05 17:51:15.043706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:55.070 [2024-11-05 17:51:15.043712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:55.070 [2024-11-05 17:51:15.043717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.070 [2024-11-05 17:51:15.043724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:55.070 [2024-11-05 17:51:15.043729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:55.070 [2024-11-05 17:51:15.043736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:55.070 [2024-11-05 17:51:15.043742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:55.070 [2024-11-05 17:51:15.043748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:55.070 [2024-11-05 17:51:15.043753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:55.070 [2024-11-05 17:51:15.043760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:55.070 [2024-11-05 17:51:15.043765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:55.070 [2024-11-05 17:51:15.043774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:55.070 [2024-11-05 17:51:15.043779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:55.070 [2024-11-05 17:51:15.043786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:55.071 [2024-11-05 17:51:15.043791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:55.071 [2024-11-05 17:51:15.043798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:55.071 [2024-11-05 17:51:15.043803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:55.071 [2024-11-05 17:51:15.043809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:55.071 [2024-11-05 17:51:15.043814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:55.071 [2024-11-05 17:51:15.043821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:55.071 [2024-11-05 17:51:15.043826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:55.071 [2024-11-05 17:51:15.043832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:55.071 [2024-11-05 17:51:15.043837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:16:55.071 [2024-11-05 17:51:15.043843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.071 [2024-11-05 17:51:15.043848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:55.071 [2024-11-05 17:51:15.043855] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:55.071 [2024-11-05 17:51:15.043860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.071 [2024-11-05 17:51:15.043868] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:55.071 [2024-11-05 17:51:15.043873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:55.071 [2024-11-05 17:51:15.043880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:55.071 [2024-11-05 17:51:15.043886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.071 [2024-11-05 17:51:15.043895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:55.071 [2024-11-05 17:51:15.043901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:55.071 [2024-11-05 17:51:15.043907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:55.071 [2024-11-05 17:51:15.043913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:55.071 [2024-11-05 17:51:15.043920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:55.071 [2024-11-05 17:51:15.043925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:55.071 [2024-11-05 17:51:15.043936] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:55.071 [2024-11-05 17:51:15.043945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:55.071 [2024-11-05 17:51:15.043954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:55.071 [2024-11-05 17:51:15.043960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:55.071 [2024-11-05 17:51:15.043967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:55.071 [2024-11-05 17:51:15.043973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:55.071 [2024-11-05 17:51:15.043981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:55.071 [2024-11-05 17:51:15.043987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:55.071 [2024-11-05 17:51:15.043994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:55.071 [2024-11-05 17:51:15.043999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:55.071 [2024-11-05 17:51:15.044006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:55.071 [2024-11-05 17:51:15.044012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:55.071 [2024-11-05 17:51:15.044019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:55.071 [2024-11-05 
17:51:15.044024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:55.071 [2024-11-05 17:51:15.044031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:55.071 [2024-11-05 17:51:15.044036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:55.071 [2024-11-05 17:51:15.044043] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:55.071 [2024-11-05 17:51:15.044051] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:55.071 [2024-11-05 17:51:15.044059] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:55.071 [2024-11-05 17:51:15.044080] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:55.071 [2024-11-05 17:51:15.044088] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:55.071 [2024-11-05 17:51:15.044094] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:55.071 [2024-11-05 17:51:15.044105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.071 [2024-11-05 17:51:15.044111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:55.071 [2024-11-05 17:51:15.044119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:16:55.071 [2024-11-05 17:51:15.044125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.071 [2024-11-05 17:51:15.044155] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
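The layout numbers above are internally consistent: ftl_layout_setup reports 20971520 L2P entries with a 4-byte address size, which is exactly the 80.00 MiB shown for the l2p region in the NV cache layout dump, and the "NV cache chunk count 5" line matches the "Scrubbing 5 chunks" message that follows. A quick reader-side check of the l2p region size (a sketch for the reader, not part of the test output):

  # 20971520 entries x 4 bytes per L2P address, expressed in MiB
  echo $(( 20971520 * 4 / 1024 / 1024 ))   # -> 80, i.e. "Region l2p ... blocks: 80.00 MiB"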
00:16:55.071 [2024-11-05 17:51:15.044164] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:58.372 [2024-11-05 17:51:17.707234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.372 [2024-11-05 17:51:17.707310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:58.372 [2024-11-05 17:51:17.707337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2663.058 ms 00:16:58.372 [2024-11-05 17:51:17.707345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.372 [2024-11-05 17:51:17.718050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.372 [2024-11-05 17:51:17.718125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:58.372 [2024-11-05 17:51:17.718144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.632 ms 00:16:58.372 [2024-11-05 17:51:17.718152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.372 [2024-11-05 17:51:17.718276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.372 [2024-11-05 17:51:17.718285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:58.372 [2024-11-05 17:51:17.718298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:16:58.372 [2024-11-05 17:51:17.718305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.372 [2024-11-05 17:51:17.752179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.372 [2024-11-05 17:51:17.752242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:58.372 [2024-11-05 17:51:17.752261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.826 ms 00:16:58.372 [2024-11-05 17:51:17.752272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.372 [2024-11-05 17:51:17.752338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.372 [2024-11-05 17:51:17.752353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:58.372 [2024-11-05 17:51:17.752365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:58.372 [2024-11-05 17:51:17.752376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.372 [2024-11-05 17:51:17.752862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.372 [2024-11-05 17:51:17.752896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:58.372 [2024-11-05 17:51:17.752913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:16:58.372 [2024-11-05 17:51:17.752923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.372 [2024-11-05 17:51:17.753106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.372 [2024-11-05 17:51:17.753117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:58.373 [2024-11-05 17:51:17.753132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:16:58.373 [2024-11-05 17:51:17.753141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.373 [2024-11-05 17:51:17.759841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.373 [2024-11-05 17:51:17.760025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:58.373 [2024-11-05 
17:51:17.760054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.676 ms 00:16:58.373 [2024-11-05 17:51:17.760097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.373 [2024-11-05 17:51:17.767919] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:58.373 [2024-11-05 17:51:17.773282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.373 [2024-11-05 17:51:17.773314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:58.373 [2024-11-05 17:51:17.773325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.099 ms 00:16:58.373 [2024-11-05 17:51:17.773334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.373 [2024-11-05 17:51:17.828975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.373 [2024-11-05 17:51:17.829057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:58.373 [2024-11-05 17:51:17.829085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.599 ms 00:16:58.373 [2024-11-05 17:51:17.829097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.373 [2024-11-05 17:51:17.829262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.373 [2024-11-05 17:51:17.829273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:58.373 [2024-11-05 17:51:17.829280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:16:58.373 [2024-11-05 17:51:17.829287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.373 [2024-11-05 17:51:17.832500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.373 [2024-11-05 17:51:17.832542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:58.373 [2024-11-05 17:51:17.832552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.198 ms 00:16:58.373 [2024-11-05 17:51:17.832566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.373 [2024-11-05 17:51:17.835533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.373 [2024-11-05 17:51:17.835572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:58.373 [2024-11-05 17:51:17.835581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.929 ms 00:16:58.373 [2024-11-05 17:51:17.835589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.373 [2024-11-05 17:51:17.835847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.373 [2024-11-05 17:51:17.835860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:58.373 [2024-11-05 17:51:17.835867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:16:58.373 [2024-11-05 17:51:17.835874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.373 [2024-11-05 17:51:17.872214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.373 [2024-11-05 17:51:17.872284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:58.373 [2024-11-05 17:51:17.872297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.322 ms 00:16:58.373 [2024-11-05 17:51:17.872305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.373 [2024-11-05 17:51:17.877790] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.373 [2024-11-05 17:51:17.877839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:58.373 [2024-11-05 17:51:17.877850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.432 ms 00:16:58.373 [2024-11-05 17:51:17.877859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.373 [2024-11-05 17:51:17.881852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.373 [2024-11-05 17:51:17.881892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:58.373 [2024-11-05 17:51:17.881901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.961 ms 00:16:58.373 [2024-11-05 17:51:17.881908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.373 [2024-11-05 17:51:17.886768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.373 [2024-11-05 17:51:17.886820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:58.373 [2024-11-05 17:51:17.886829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.830 ms 00:16:58.373 [2024-11-05 17:51:17.886837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.373 [2024-11-05 17:51:17.886870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.373 [2024-11-05 17:51:17.886884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:58.373 [2024-11-05 17:51:17.886891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:58.373 [2024-11-05 17:51:17.886899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.373 [2024-11-05 17:51:17.886962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.373 [2024-11-05 17:51:17.886970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:58.373 [2024-11-05 17:51:17.886977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:16:58.373 [2024-11-05 17:51:17.886984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.373 [2024-11-05 17:51:17.888300] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2855.566 ms, result 0 00:16:58.373 { 00:16:58.373 "name": "ftl0", 00:16:58.373 "uuid": "50d60535-ce4c-4a36-920d-dbeeb1ef0780" 00:16:58.373 } 00:16:58.373 17:51:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:58.373 17:51:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:16:58.373 17:51:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:16:58.373 17:51:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:58.373 [2024-11-05 17:51:18.209360] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:58.373 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:58.373 Zero copy mechanism will not be used. 00:16:58.373 Running I/O for 4 seconds... 
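The perform_tests call above drives the already-running bdevperf process over RPC; judging by the fields echoed back in the result JSON below, -q sets the queue depth, -w the workload, -t the run time in seconds, and -o the I/O size in bytes. A hypothetical standalone reproduction of this first pass (assuming the same checkout and a bdevperf instance waiting for RPC) would be:

  # qd 1, random writes, 4 s, 69632-byte I/Os; 69632 = 65536 + 4096, which is
  # why the 65536-byte zero-copy threshold notice is printed above
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests \
      -q 1 -w randwrite -t 4 -o 69632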
00:17:00.253 1600.00 IOPS, 106.25 MiB/s [2024-11-05T17:51:21.628Z] 1381.50 IOPS, 91.74 MiB/s [2024-11-05T17:51:22.571Z] 1263.67 IOPS, 83.92 MiB/s [2024-11-05T17:51:22.571Z] 1238.75 IOPS, 82.26 MiB/s 00:17:02.580 Latency(us) 00:17:02.580 [2024-11-05T17:51:22.571Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:02.580 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:17:02.580 ftl0 : 4.00 1238.39 82.24 0.00 0.00 844.60 177.23 104051.00 00:17:02.580 [2024-11-05T17:51:22.571Z] =================================================================================================================== 00:17:02.580 [2024-11-05T17:51:22.571Z] Total : 1238.39 82.24 0.00 0.00 844.60 177.23 104051.00 00:17:02.580 [2024-11-05 17:51:22.217168] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:02.580 { 00:17:02.580 "results": [ 00:17:02.580 { 00:17:02.580 "job": "ftl0", 00:17:02.580 "core_mask": "0x1", 00:17:02.580 "workload": "randwrite", 00:17:02.580 "status": "finished", 00:17:02.580 "queue_depth": 1, 00:17:02.580 "io_size": 69632, 00:17:02.580 "runtime": 4.001981, 00:17:02.580 "iops": 1238.3866889922765, 00:17:02.580 "mibps": 82.23661606589336, 00:17:02.580 "io_failed": 0, 00:17:02.580 "io_timeout": 0, 00:17:02.580 "avg_latency_us": 844.6014080834419, 00:17:02.580 "min_latency_us": 177.23076923076923, 00:17:02.580 "max_latency_us": 104051.00307692308 00:17:02.580 } 00:17:02.580 ], 00:17:02.580 "core_count": 1 00:17:02.580 } 00:17:02.580 17:51:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:17:02.580 [2024-11-05 17:51:22.327838] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:02.580 Running I/O for 4 seconds... 
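The "iops" and "mibps" fields in the first pass's JSON above are tied together by the 69632-byte I/O size; MiB/s is just IOPS times I/O size over 2^20, which a one-liner confirms:

  awk 'BEGIN { printf "%.2f\n", 1238.3866889922765 * 69632 / 1048576 }'   # -> 82.24, matching "mibps"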
00:17:04.467 8236.00 IOPS, 32.17 MiB/s [2024-11-05T17:51:25.400Z] 7738.50 IOPS, 30.23 MiB/s [2024-11-05T17:51:26.408Z] 7743.67 IOPS, 30.25 MiB/s [2024-11-05T17:51:26.408Z] 7697.00 IOPS, 30.07 MiB/s 00:17:06.417 Latency(us) 00:17:06.417 [2024-11-05T17:51:26.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:06.417 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:17:06.417 ftl0 : 4.03 7670.48 29.96 0.00 0.00 16627.41 264.66 61301.37 00:17:06.417 [2024-11-05T17:51:26.408Z] =================================================================================================================== 00:17:06.417 [2024-11-05T17:51:26.408Z] Total : 7670.48 29.96 0.00 0.00 16627.41 0.00 61301.37 00:17:06.417 [2024-11-05 17:51:26.365325] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:06.417 { 00:17:06.417 "results": [ 00:17:06.417 { 00:17:06.417 "job": "ftl0", 00:17:06.417 "core_mask": "0x1", 00:17:06.417 "workload": "randwrite", 00:17:06.417 "status": "finished", 00:17:06.417 "queue_depth": 128, 00:17:06.417 "io_size": 4096, 00:17:06.417 "runtime": 4.030127, 00:17:06.417 "iops": 7670.477878240562, 00:17:06.417 "mibps": 29.962804211877195, 00:17:06.417 "io_failed": 0, 00:17:06.417 "io_timeout": 0, 00:17:06.417 "avg_latency_us": 16627.411956831704, 00:17:06.417 "min_latency_us": 264.6646153846154, 00:17:06.417 "max_latency_us": 61301.36615384615 00:17:06.417 } 00:17:06.417 ], 00:17:06.417 "core_count": 1 00:17:06.417 } 00:17:06.417 17:51:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:17:06.679 [2024-11-05 17:51:26.471274] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:06.679 Running I/O for 4 seconds... 
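For the qd-128 pass above, throughput and latency are linked by Little's law (outstanding I/Os = IOPS x average latency), so the reported average latency predicts the measured IOPS to well under one percent, the residue being ramp-up and drain time:

  awk 'BEGIN { printf "%.0f\n", 128 / (16627.411956831704 / 1e6) }'   # -> ~7698 predicted vs. 7670 measured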
00:17:08.564 6640.00 IOPS, 25.94 MiB/s [2024-11-05T17:51:29.496Z] 6502.50 IOPS, 25.40 MiB/s [2024-11-05T17:51:30.881Z] 6390.00 IOPS, 24.96 MiB/s [2024-11-05T17:51:30.881Z] 6224.75 IOPS, 24.32 MiB/s 00:17:10.890 Latency(us) 00:17:10.890 [2024-11-05T17:51:30.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.890 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:10.890 Verification LBA range: start 0x0 length 0x1400000 00:17:10.890 ftl0 : 4.01 6240.00 24.38 0.00 0.00 20457.51 259.94 37506.76 00:17:10.890 [2024-11-05T17:51:30.881Z] =================================================================================================================== 00:17:10.890 [2024-11-05T17:51:30.881Z] Total : 6240.00 24.38 0.00 0.00 20457.51 0.00 37506.76 00:17:10.890 [2024-11-05 17:51:30.489503] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:10.890 { 00:17:10.890 "results": [ 00:17:10.890 { 00:17:10.890 "job": "ftl0", 00:17:10.890 "core_mask": "0x1", 00:17:10.890 "workload": "verify", 00:17:10.890 "status": "finished", 00:17:10.890 "verify_range": { 00:17:10.890 "start": 0, 00:17:10.890 "length": 20971520 00:17:10.890 }, 00:17:10.890 "queue_depth": 128, 00:17:10.890 "io_size": 4096, 00:17:10.890 "runtime": 4.010736, 00:17:10.890 "iops": 6240.0018350746595, 00:17:10.890 "mibps": 24.37500716826039, 00:17:10.890 "io_failed": 0, 00:17:10.890 "io_timeout": 0, 00:17:10.890 "avg_latency_us": 20457.514212527392, 00:17:10.890 "min_latency_us": 259.9384615384615, 00:17:10.890 "max_latency_us": 37506.75692307692 00:17:10.890 } 00:17:10.890 ], 00:17:10.890 "core_count": 1 00:17:10.890 } 00:17:10.890 17:51:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 [2024-11-05 17:51:30.697866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.890 [2024-11-05 17:51:30.697934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:10.890 [2024-11-05 17:51:30.697950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:10.890 [2024-11-05 17:51:30.697960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.890 [2024-11-05 17:51:30.697985] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:10.890 [2024-11-05 17:51:30.698562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.890 [2024-11-05 17:51:30.698588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:10.890 [2024-11-05 17:51:30.698600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:17:10.890 [2024-11-05 17:51:30.698609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.890 [2024-11-05 17:51:30.700811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.890 [2024-11-05 17:51:30.700935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:10.890 [2024-11-05 17:51:30.700962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.177 ms 00:17:10.890 [2024-11-05 17:51:30.700970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.890 [2024-11-05 17:51:30.879945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.890 [2024-11-05 17:51:30.880020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist
L2P 00:17:10.890 [2024-11-05 17:51:30.880043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 178.939 ms 00:17:10.890 [2024-11-05 17:51:30.880052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.191 [2024-11-05 17:51:30.886271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.191 [2024-11-05 17:51:30.886310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:11.191 [2024-11-05 17:51:30.886324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.167 ms 00:17:11.191 [2024-11-05 17:51:30.886333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.191 [2024-11-05 17:51:30.888797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.191 [2024-11-05 17:51:30.888938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:11.191 [2024-11-05 17:51:30.888958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.388 ms 00:17:11.191 [2024-11-05 17:51:30.888966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.191 [2024-11-05 17:51:30.894062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.191 [2024-11-05 17:51:30.894130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:11.191 [2024-11-05 17:51:30.894146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.063 ms 00:17:11.191 [2024-11-05 17:51:30.894162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.191 [2024-11-05 17:51:30.894276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.191 [2024-11-05 17:51:30.894287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:11.191 [2024-11-05 17:51:30.894297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:17:11.191 [2024-11-05 17:51:30.894305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.191 [2024-11-05 17:51:30.896638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.191 [2024-11-05 17:51:30.896671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:11.191 [2024-11-05 17:51:30.896683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.314 ms 00:17:11.191 [2024-11-05 17:51:30.896690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.191 [2024-11-05 17:51:30.898197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.191 [2024-11-05 17:51:30.898225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:11.191 [2024-11-05 17:51:30.898236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.474 ms 00:17:11.191 [2024-11-05 17:51:30.898244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.191 [2024-11-05 17:51:30.899340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.191 [2024-11-05 17:51:30.899453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:11.191 [2024-11-05 17:51:30.899474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.064 ms 00:17:11.191 [2024-11-05 17:51:30.899482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.191 [2024-11-05 17:51:30.900749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.191 [2024-11-05 17:51:30.900780] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:11.191 [2024-11-05 17:51:30.900791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.212 ms 00:17:11.191 [2024-11-05 17:51:30.900798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.191 [2024-11-05 17:51:30.900827] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:11.191 [2024-11-05 17:51:30.900842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.900860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.900869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.900878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.900886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.900896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.900904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.900914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.900921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.900933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.900940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.900950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.900957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.900966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.900973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.900982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.900990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.900999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.901007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.901016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.901024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.901033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:17:11.191 [2024-11-05 17:51:30.901040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.901049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.901057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.901077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.901086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.901097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.901104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.901114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.901121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.901137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.901145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.901155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.901162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.901171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.901180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.901189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.901197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:11.191 [2024-11-05 17:51:30.901206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901699] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:11.192 [2024-11-05 17:51:30.901740] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:11.192 [2024-11-05 17:51:30.901753] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 50d60535-ce4c-4a36-920d-dbeeb1ef0780 00:17:11.192 [2024-11-05 17:51:30.901761] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:11.192 [2024-11-05 17:51:30.901775] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:11.192 [2024-11-05 17:51:30.901782] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:11.192 [2024-11-05 17:51:30.901793] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:11.192 [2024-11-05 17:51:30.901800] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:11.192 [2024-11-05 17:51:30.901809] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:11.192 [2024-11-05 17:51:30.901816] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:11.192 [2024-11-05 17:51:30.901824] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:11.192 [2024-11-05 17:51:30.901831] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:11.192 [2024-11-05 17:51:30.901839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.192 [2024-11-05 17:51:30.901850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:11.192 [2024-11-05 17:51:30.901860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.014 ms 00:17:11.192 [2024-11-05 17:51:30.901869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.192 [2024-11-05 17:51:30.903744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.192 [2024-11-05 17:51:30.903765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:11.192 [2024-11-05 17:51:30.903777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.857 ms 00:17:11.192 [2024-11-05 17:51:30.903785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.192 [2024-11-05 17:51:30.903879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.192 [2024-11-05 17:51:30.903888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:11.192 [2024-11-05 17:51:30.903903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:17:11.192 [2024-11-05 17:51:30.903910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.192 [2024-11-05 17:51:30.910148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:11.192 [2024-11-05 17:51:30.910300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:11.192 [2024-11-05 17:51:30.910318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:11.192 [2024-11-05 17:51:30.910327] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0
00:17:11.192 [2024-11-05 17:51:30.910393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:11.192 [2024-11-05 17:51:30.910402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:17:11.192 [2024-11-05 17:51:30.910415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:11.192 [2024-11-05 17:51:30.910422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.192 [2024-11-05 17:51:30.910498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:11.192 [2024-11-05 17:51:30.910513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:17:11.192 [2024-11-05 17:51:30.910524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:11.192 [2024-11-05 17:51:30.910532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.192 [2024-11-05 17:51:30.910549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:11.192 [2024-11-05 17:51:30.910557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:17:11.192 [2024-11-05 17:51:30.910568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:11.193 [2024-11-05 17:51:30.910578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.193 [2024-11-05 17:51:30.922329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:11.193 [2024-11-05 17:51:30.922386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:17:11.193 [2024-11-05 17:51:30.922400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:11.193 [2024-11-05 17:51:30.922410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.193 [2024-11-05 17:51:30.932048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:11.193 [2024-11-05 17:51:30.932109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:17:11.193 [2024-11-05 17:51:30.932128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:11.193 [2024-11-05 17:51:30.932136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.193 [2024-11-05 17:51:30.932224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:11.193 [2024-11-05 17:51:30.932234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:17:11.193 [2024-11-05 17:51:30.932244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:11.193 [2024-11-05 17:51:30.932258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.193 [2024-11-05 17:51:30.932291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:11.193 [2024-11-05 17:51:30.932300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:17:11.193 [2024-11-05 17:51:30.932313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:11.193 [2024-11-05 17:51:30.932320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.193 [2024-11-05 17:51:30.932391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:11.193 [2024-11-05 17:51:30.932400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:17:11.193 [2024-11-05 17:51:30.932410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:11.193 [2024-11-05 17:51:30.932417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.193 [2024-11-05 17:51:30.932451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:11.193 [2024-11-05 17:51:30.932462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:17:11.193 [2024-11-05 17:51:30.932472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:11.193 [2024-11-05 17:51:30.932479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.193 [2024-11-05 17:51:30.932522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:11.193 [2024-11-05 17:51:30.932530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:11.193 [2024-11-05 17:51:30.932540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:11.193 [2024-11-05 17:51:30.932548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.193 [2024-11-05 17:51:30.932606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:11.193 [2024-11-05 17:51:30.932619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:11.193 [2024-11-05 17:51:30.932631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:11.193 [2024-11-05 17:51:30.932639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:11.193 [2024-11-05 17:51:30.932778] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 234.863 ms, result 0
00:17:11.193 true
00:17:11.193 17:51:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 85904
00:17:11.193 17:51:30 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 85904 ']'
00:17:11.193 17:51:30 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # kill -0 85904
00:17:11.193 17:51:30 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # uname
00:17:11.193 17:51:30 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:17:11.193 17:51:30 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85904
00:17:11.193 killing process with pid 85904
Received shutdown signal, test time was about 4.000000 seconds
00:17:11.193 
00:17:11.193 Latency(us)
[2024-11-05T17:51:31.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-05T17:51:31.184Z] ===================================================================================================================
[2024-11-05T17:51:31.184Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:11.193 17:51:30 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:17:11.193 17:51:30 ftl.ftl_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:17:11.193 17:51:30 ftl.ftl_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85904'
00:17:11.193 17:51:30 ftl.ftl_bdevperf -- common/autotest_common.sh@971 -- # kill 85904
00:17:11.193 17:51:30 ftl.ftl_bdevperf -- common/autotest_common.sh@976 -- # wait 85904
00:17:11.454 Remove shared memory files
00:17:11.454 17:51:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:17:11.454 17:51:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm
00:17:11.454 17:51:31 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files
00:17:11.454 17:51:31 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f
00:17:11.454 17:51:31 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f
00:17:11.454 17:51:31 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f
00:17:11.454 17:51:31 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:17:11.454 17:51:31 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f
00:17:11.454 ************************************
00:17:11.454 END TEST ftl_bdevperf
00:17:11.454 ************************************
00:17:11.454 
00:17:11.454 real 0m20.355s
00:17:11.454 user 0m23.027s
00:17:11.454 sys 0m0.875s
00:17:11.454 17:51:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable
00:17:11.454 17:51:31 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:17:11.715 17:51:31 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:17:11.715 17:51:31 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:17:11.715 17:51:31 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable
00:17:11.715 17:51:31 ftl -- common/autotest_common.sh@10 -- # set +x
00:17:11.715 ************************************
00:17:11.715 START TEST ftl_trim
00:17:11.715 ************************************
00:17:11.715 17:51:31 ftl.ftl_trim -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:17:11.715 * Looking for test storage...
00:17:11.715 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:17:11.715 17:51:31 ftl.ftl_trim -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:17:11.715 17:51:31 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lcov --version
00:17:11.715 17:51:31 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:17:11.715 17:51:31 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:17:11.715 17:51:31 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:11.715 17:51:31 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:11.715 17:51:31 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:11.715 17:51:31 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-:
00:17:11.715 17:51:31 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1
00:17:11.715 17:51:31 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-:
00:17:11.715 17:51:31 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2
00:17:11.715 17:51:31 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<'
00:17:11.715 17:51:31 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2
00:17:11.715 17:51:31 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1
00:17:11.715 17:51:31 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:11.715 17:51:31 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in
00:17:11.715 17:51:31 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1
00:17:11.715 17:51:31 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:11.715 17:51:31 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:17:11.715 17:51:31 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:17:11.715 17:51:31 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:17:11.715 17:51:31 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:11.715 17:51:31 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:17:11.715 17:51:31 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:17:11.715 17:51:31 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:17:11.715 17:51:31 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:17:11.715 17:51:31 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:11.715 17:51:31 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:17:11.715 17:51:31 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:17:11.715 17:51:31 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:11.715 17:51:31 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:11.715 17:51:31 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:17:11.715 17:51:31 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:11.715 17:51:31 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:11.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.715 --rc genhtml_branch_coverage=1 00:17:11.715 --rc genhtml_function_coverage=1 00:17:11.715 --rc genhtml_legend=1 00:17:11.715 --rc geninfo_all_blocks=1 00:17:11.715 --rc geninfo_unexecuted_blocks=1 00:17:11.715 00:17:11.715 ' 00:17:11.715 17:51:31 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:11.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.716 --rc genhtml_branch_coverage=1 00:17:11.716 --rc genhtml_function_coverage=1 00:17:11.716 --rc genhtml_legend=1 00:17:11.716 --rc geninfo_all_blocks=1 00:17:11.716 --rc geninfo_unexecuted_blocks=1 00:17:11.716 00:17:11.716 ' 00:17:11.716 17:51:31 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:11.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.716 --rc genhtml_branch_coverage=1 00:17:11.716 --rc genhtml_function_coverage=1 00:17:11.716 --rc genhtml_legend=1 00:17:11.716 --rc geninfo_all_blocks=1 00:17:11.716 --rc geninfo_unexecuted_blocks=1 00:17:11.716 00:17:11.716 ' 00:17:11.716 17:51:31 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:11.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.716 --rc genhtml_branch_coverage=1 00:17:11.716 --rc genhtml_function_coverage=1 00:17:11.716 --rc genhtml_legend=1 00:17:11.716 --rc geninfo_all_blocks=1 00:17:11.716 --rc geninfo_unexecuted_blocks=1 00:17:11.716 00:17:11.716 ' 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
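The xtrace above (scripts/common.sh@333-368) is the harness comparing the installed lcov version against 1.15: cmp_versions splits each version string on '.', '-' and ':' into an array, normalizes every component through decimal, and compares the components pairwise for up to max(ver1_l, ver2_l) rounds. A minimal self-contained sketch of that comparison logic follows; the function name version_lt and the base-10 guard are illustrative assumptions, not the verbatim scripts/common.sh source.

  #!/usr/bin/env bash
  # Sketch of the dotted-version comparison traced above (illustrative only,
  # not the exact scripts/common.sh implementation).
  version_lt() {               # succeeds when $1 sorts strictly below $2
      local IFS=.-:            # same separators the trace shows (IFS=.-:)
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
          (( 10#$a > 10#$b )) && return 1         # 10# forces base 10, as decimal does
          (( 10#$a < 10#$b )) && return 0
      done
      return 1                 # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "1.15 < 2"

With ver1=(1 15) and ver2=(2), the very first round decides the result (1 < 2), which is why the trace returns 0 at scripts/common.sh@368 and the run goes on to set the branch- and function-coverage options in LCOV_OPTS below.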
00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:11.716 17:51:31 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=86234 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 86234 00:17:11.716 17:51:31 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:17:11.716 17:51:31 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 86234 ']' 00:17:11.716 17:51:31 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.716 17:51:31 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:11.716 17:51:31 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.716 17:51:31 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:11.716 17:51:31 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:11.978 [2024-11-05 17:51:31.727940] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:17:11.978 [2024-11-05 17:51:31.728210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86234 ] 00:17:11.978 [2024-11-05 17:51:31.858980] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:11.978 [2024-11-05 17:51:31.887412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:11.978 [2024-11-05 17:51:31.914933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.978 [2024-11-05 17:51:31.915126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.978 [2024-11-05 17:51:31.915184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:12.550 17:51:32 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:12.550 17:51:32 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:17:12.550 17:51:32 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:12.550 17:51:32 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:17:12.550 17:51:32 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:12.550 17:51:32 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:17:12.550 17:51:32 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:17:12.550 17:51:32 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:13.120 17:51:32 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:13.120 17:51:32 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:17:13.120 17:51:32 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:13.120 17:51:32 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:17:13.120 17:51:32 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:13.120 17:51:32 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:17:13.120 17:51:32 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:17:13.120 17:51:32 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:13.120 17:51:33 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:13.120 { 00:17:13.120 "name": "nvme0n1", 00:17:13.120 "aliases": [ 00:17:13.120 "d386f341-a8a8-4766-85ff-92b3a867bb0e" 00:17:13.120 ], 00:17:13.120 "product_name": "NVMe disk", 00:17:13.120 "block_size": 4096, 00:17:13.120 "num_blocks": 1310720, 00:17:13.120 "uuid": "d386f341-a8a8-4766-85ff-92b3a867bb0e", 00:17:13.120 "numa_id": -1, 00:17:13.120 "assigned_rate_limits": { 00:17:13.120 "rw_ios_per_sec": 0, 00:17:13.120 "rw_mbytes_per_sec": 0, 00:17:13.120 "r_mbytes_per_sec": 0, 00:17:13.120 "w_mbytes_per_sec": 0 00:17:13.120 }, 00:17:13.120 "claimed": true, 00:17:13.120 "claim_type": "read_many_write_one", 00:17:13.120 "zoned": false, 00:17:13.120 "supported_io_types": { 00:17:13.120 "read": true, 00:17:13.120 "write": true, 00:17:13.120 "unmap": true, 00:17:13.120 "flush": true, 00:17:13.120 "reset": true, 00:17:13.120 "nvme_admin": true, 00:17:13.120 "nvme_io": true, 00:17:13.120 "nvme_io_md": false, 00:17:13.120 "write_zeroes": true, 00:17:13.120 "zcopy": false, 00:17:13.120 "get_zone_info": false, 00:17:13.120 "zone_management": false, 00:17:13.120 "zone_append": false, 00:17:13.120 "compare": true, 00:17:13.120 "compare_and_write": false, 00:17:13.120 "abort": true, 00:17:13.120 "seek_hole": false, 00:17:13.120 "seek_data": false, 00:17:13.120 "copy": true, 00:17:13.120 "nvme_iov_md": false 00:17:13.120 }, 00:17:13.120 "driver_specific": { 00:17:13.120 "nvme": [ 00:17:13.120 { 00:17:13.120 "pci_address": "0000:00:11.0", 00:17:13.120 "trid": { 00:17:13.120 "trtype": "PCIe", 00:17:13.120 "traddr": "0000:00:11.0" 00:17:13.120 }, 00:17:13.120 "ctrlr_data": { 00:17:13.120 "cntlid": 0, 00:17:13.120 "vendor_id": "0x1b36", 00:17:13.120 "model_number": "QEMU NVMe Ctrl", 00:17:13.120 "serial_number": "12341", 00:17:13.120 "firmware_revision": "8.0.0", 00:17:13.120 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:13.120 "oacs": { 00:17:13.120 "security": 0, 00:17:13.120 "format": 1, 00:17:13.120 "firmware": 0, 00:17:13.120 "ns_manage": 1 00:17:13.120 }, 00:17:13.120 "multi_ctrlr": false, 00:17:13.120 "ana_reporting": false 00:17:13.120 }, 00:17:13.120 "vs": { 00:17:13.120 "nvme_version": "1.4" 00:17:13.120 }, 00:17:13.120 "ns_data": { 00:17:13.120 "id": 1, 00:17:13.120 "can_share": false 00:17:13.120 } 00:17:13.120 } 00:17:13.120 ], 00:17:13.120 "mp_policy": "active_passive" 00:17:13.120 } 00:17:13.120 } 00:17:13.120 ]' 00:17:13.120 17:51:33 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:13.120 17:51:33 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:17:13.120 17:51:33 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:13.120 17:51:33 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=1310720 00:17:13.120 17:51:33 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:17:13.120 17:51:33 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 5120 00:17:13.120 17:51:33 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:17:13.120 17:51:33 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:13.120 17:51:33 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:17:13.120 17:51:33 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:13.120 17:51:33 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:13.381 17:51:33 ftl.ftl_trim -- ftl/common.sh@28 -- # 
stores=602d0787-0a24-4377-b57d-edd3497e8cba 00:17:13.381 17:51:33 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:17:13.381 17:51:33 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 602d0787-0a24-4377-b57d-edd3497e8cba 00:17:13.643 17:51:33 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:13.938 17:51:33 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=ecc95f44-c7d9-40a9-a475-a2feb3365049 00:17:13.938 17:51:33 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ecc95f44-c7d9-40a9-a475-a2feb3365049 00:17:13.938 17:51:33 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=19944047-96b0-4cf2-b4d4-f4b804b99e0d 00:17:13.938 17:51:33 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 19944047-96b0-4cf2-b4d4-f4b804b99e0d 00:17:13.938 17:51:33 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:17:13.938 17:51:33 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:13.938 17:51:33 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=19944047-96b0-4cf2-b4d4-f4b804b99e0d 00:17:13.938 17:51:33 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:17:14.199 17:51:33 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 19944047-96b0-4cf2-b4d4-f4b804b99e0d 00:17:14.199 17:51:33 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=19944047-96b0-4cf2-b4d4-f4b804b99e0d 00:17:14.199 17:51:33 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:14.199 17:51:33 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:17:14.199 17:51:33 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:17:14.199 17:51:33 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 19944047-96b0-4cf2-b4d4-f4b804b99e0d 00:17:14.199 17:51:34 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:14.199 { 00:17:14.199 "name": "19944047-96b0-4cf2-b4d4-f4b804b99e0d", 00:17:14.199 "aliases": [ 00:17:14.199 "lvs/nvme0n1p0" 00:17:14.199 ], 00:17:14.199 "product_name": "Logical Volume", 00:17:14.199 "block_size": 4096, 00:17:14.199 "num_blocks": 26476544, 00:17:14.199 "uuid": "19944047-96b0-4cf2-b4d4-f4b804b99e0d", 00:17:14.199 "assigned_rate_limits": { 00:17:14.199 "rw_ios_per_sec": 0, 00:17:14.199 "rw_mbytes_per_sec": 0, 00:17:14.199 "r_mbytes_per_sec": 0, 00:17:14.199 "w_mbytes_per_sec": 0 00:17:14.199 }, 00:17:14.199 "claimed": false, 00:17:14.199 "zoned": false, 00:17:14.199 "supported_io_types": { 00:17:14.199 "read": true, 00:17:14.199 "write": true, 00:17:14.199 "unmap": true, 00:17:14.199 "flush": false, 00:17:14.199 "reset": true, 00:17:14.199 "nvme_admin": false, 00:17:14.199 "nvme_io": false, 00:17:14.199 "nvme_io_md": false, 00:17:14.199 "write_zeroes": true, 00:17:14.199 "zcopy": false, 00:17:14.199 "get_zone_info": false, 00:17:14.199 "zone_management": false, 00:17:14.199 "zone_append": false, 00:17:14.199 "compare": false, 00:17:14.199 "compare_and_write": false, 00:17:14.199 "abort": false, 00:17:14.199 "seek_hole": true, 00:17:14.199 "seek_data": true, 00:17:14.199 "copy": false, 00:17:14.199 "nvme_iov_md": false 00:17:14.199 }, 00:17:14.199 "driver_specific": { 00:17:14.199 "lvol": { 00:17:14.199 "lvol_store_uuid": "ecc95f44-c7d9-40a9-a475-a2feb3365049", 00:17:14.199 "base_bdev": "nvme0n1", 00:17:14.199 "thin_provision": true, 
00:17:14.199 "num_allocated_clusters": 0, 00:17:14.199 "snapshot": false, 00:17:14.199 "clone": false, 00:17:14.199 "esnap_clone": false 00:17:14.199 } 00:17:14.199 } 00:17:14.199 } 00:17:14.199 ]' 00:17:14.199 17:51:34 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:14.199 17:51:34 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:17:14.199 17:51:34 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:14.460 17:51:34 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:17:14.460 17:51:34 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:14.460 17:51:34 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:17:14.460 17:51:34 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:17:14.460 17:51:34 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:17:14.460 17:51:34 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:14.729 17:51:34 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:14.729 17:51:34 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:14.729 17:51:34 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 19944047-96b0-4cf2-b4d4-f4b804b99e0d 00:17:14.729 17:51:34 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=19944047-96b0-4cf2-b4d4-f4b804b99e0d 00:17:14.729 17:51:34 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:14.729 17:51:34 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:17:14.729 17:51:34 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:17:14.729 17:51:34 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 19944047-96b0-4cf2-b4d4-f4b804b99e0d 00:17:14.729 17:51:34 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:14.729 { 00:17:14.729 "name": "19944047-96b0-4cf2-b4d4-f4b804b99e0d", 00:17:14.729 "aliases": [ 00:17:14.729 "lvs/nvme0n1p0" 00:17:14.729 ], 00:17:14.729 "product_name": "Logical Volume", 00:17:14.729 "block_size": 4096, 00:17:14.729 "num_blocks": 26476544, 00:17:14.729 "uuid": "19944047-96b0-4cf2-b4d4-f4b804b99e0d", 00:17:14.729 "assigned_rate_limits": { 00:17:14.729 "rw_ios_per_sec": 0, 00:17:14.729 "rw_mbytes_per_sec": 0, 00:17:14.729 "r_mbytes_per_sec": 0, 00:17:14.729 "w_mbytes_per_sec": 0 00:17:14.729 }, 00:17:14.729 "claimed": false, 00:17:14.729 "zoned": false, 00:17:14.729 "supported_io_types": { 00:17:14.729 "read": true, 00:17:14.729 "write": true, 00:17:14.729 "unmap": true, 00:17:14.729 "flush": false, 00:17:14.729 "reset": true, 00:17:14.729 "nvme_admin": false, 00:17:14.729 "nvme_io": false, 00:17:14.729 "nvme_io_md": false, 00:17:14.729 "write_zeroes": true, 00:17:14.729 "zcopy": false, 00:17:14.729 "get_zone_info": false, 00:17:14.729 "zone_management": false, 00:17:14.729 "zone_append": false, 00:17:14.729 "compare": false, 00:17:14.729 "compare_and_write": false, 00:17:14.729 "abort": false, 00:17:14.729 "seek_hole": true, 00:17:14.729 "seek_data": true, 00:17:14.729 "copy": false, 00:17:14.729 "nvme_iov_md": false 00:17:14.729 }, 00:17:14.729 "driver_specific": { 00:17:14.729 "lvol": { 00:17:14.729 "lvol_store_uuid": "ecc95f44-c7d9-40a9-a475-a2feb3365049", 00:17:14.729 "base_bdev": "nvme0n1", 00:17:14.729 "thin_provision": true, 00:17:14.729 "num_allocated_clusters": 0, 00:17:14.729 "snapshot": false, 00:17:14.729 "clone": false, 00:17:14.729 
"esnap_clone": false 00:17:14.729 } 00:17:14.729 } 00:17:14.729 } 00:17:14.729 ]' 00:17:14.729 17:51:34 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:14.730 17:51:34 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:17:14.730 17:51:34 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:15.009 17:51:34 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:17:15.009 17:51:34 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:15.009 17:51:34 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:17:15.009 17:51:34 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:17:15.009 17:51:34 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:15.009 17:51:34 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:17:15.009 17:51:34 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:17:15.009 17:51:34 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 19944047-96b0-4cf2-b4d4-f4b804b99e0d 00:17:15.009 17:51:34 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=19944047-96b0-4cf2-b4d4-f4b804b99e0d 00:17:15.009 17:51:34 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:15.009 17:51:34 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:17:15.009 17:51:34 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:17:15.009 17:51:34 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 19944047-96b0-4cf2-b4d4-f4b804b99e0d 00:17:15.270 17:51:35 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:15.270 { 00:17:15.270 "name": "19944047-96b0-4cf2-b4d4-f4b804b99e0d", 00:17:15.270 "aliases": [ 00:17:15.270 "lvs/nvme0n1p0" 00:17:15.270 ], 00:17:15.270 "product_name": "Logical Volume", 00:17:15.270 "block_size": 4096, 00:17:15.270 "num_blocks": 26476544, 00:17:15.270 "uuid": "19944047-96b0-4cf2-b4d4-f4b804b99e0d", 00:17:15.270 "assigned_rate_limits": { 00:17:15.270 "rw_ios_per_sec": 0, 00:17:15.270 "rw_mbytes_per_sec": 0, 00:17:15.270 "r_mbytes_per_sec": 0, 00:17:15.270 "w_mbytes_per_sec": 0 00:17:15.270 }, 00:17:15.270 "claimed": false, 00:17:15.270 "zoned": false, 00:17:15.270 "supported_io_types": { 00:17:15.270 "read": true, 00:17:15.270 "write": true, 00:17:15.270 "unmap": true, 00:17:15.270 "flush": false, 00:17:15.270 "reset": true, 00:17:15.270 "nvme_admin": false, 00:17:15.270 "nvme_io": false, 00:17:15.270 "nvme_io_md": false, 00:17:15.270 "write_zeroes": true, 00:17:15.270 "zcopy": false, 00:17:15.270 "get_zone_info": false, 00:17:15.270 "zone_management": false, 00:17:15.270 "zone_append": false, 00:17:15.270 "compare": false, 00:17:15.270 "compare_and_write": false, 00:17:15.270 "abort": false, 00:17:15.270 "seek_hole": true, 00:17:15.270 "seek_data": true, 00:17:15.270 "copy": false, 00:17:15.270 "nvme_iov_md": false 00:17:15.270 }, 00:17:15.270 "driver_specific": { 00:17:15.270 "lvol": { 00:17:15.270 "lvol_store_uuid": "ecc95f44-c7d9-40a9-a475-a2feb3365049", 00:17:15.270 "base_bdev": "nvme0n1", 00:17:15.270 "thin_provision": true, 00:17:15.270 "num_allocated_clusters": 0, 00:17:15.270 "snapshot": false, 00:17:15.270 "clone": false, 00:17:15.270 "esnap_clone": false 00:17:15.270 } 00:17:15.270 } 00:17:15.270 } 00:17:15.270 ]' 00:17:15.270 17:51:35 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:15.270 17:51:35 ftl.ftl_trim -- 
common/autotest_common.sh@1385 -- # bs=4096 00:17:15.270 17:51:35 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:15.270 17:51:35 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:17:15.270 17:51:35 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:15.270 17:51:35 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:17:15.270 17:51:35 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:17:15.270 17:51:35 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 19944047-96b0-4cf2-b4d4-f4b804b99e0d -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:17:15.533 [2024-11-05 17:51:35.390613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.533 [2024-11-05 17:51:35.390664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:15.533 [2024-11-05 17:51:35.390679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:15.533 [2024-11-05 17:51:35.390689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.533 [2024-11-05 17:51:35.392838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.533 [2024-11-05 17:51:35.392996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:15.533 [2024-11-05 17:51:35.393014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.125 ms 00:17:15.533 [2024-11-05 17:51:35.393021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.533 [2024-11-05 17:51:35.393418] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:15.533 [2024-11-05 17:51:35.393689] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:15.533 [2024-11-05 17:51:35.393726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.533 [2024-11-05 17:51:35.393734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:15.533 [2024-11-05 17:51:35.393745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:17:15.533 [2024-11-05 17:51:35.393752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.533 [2024-11-05 17:51:35.393884] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 01e86416-23e3-4012-8803-d38bc6f06433 00:17:15.533 [2024-11-05 17:51:35.395212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.533 [2024-11-05 17:51:35.395239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:15.533 [2024-11-05 17:51:35.395248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:15.533 [2024-11-05 17:51:35.395256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.533 [2024-11-05 17:51:35.402189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.533 [2024-11-05 17:51:35.402216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:15.533 [2024-11-05 17:51:35.402224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.860 ms 00:17:15.533 [2024-11-05 17:51:35.402234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.533 [2024-11-05 17:51:35.402344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:15.533 [2024-11-05 17:51:35.402358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:15.533 [2024-11-05 17:51:35.402376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:15.533 [2024-11-05 17:51:35.402386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.533 [2024-11-05 17:51:35.402426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.533 [2024-11-05 17:51:35.402434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:15.533 [2024-11-05 17:51:35.402441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:15.533 [2024-11-05 17:51:35.402449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.533 [2024-11-05 17:51:35.402478] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:15.533 [2024-11-05 17:51:35.404123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.533 [2024-11-05 17:51:35.404147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:15.533 [2024-11-05 17:51:35.404168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.647 ms 00:17:15.533 [2024-11-05 17:51:35.404175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.533 [2024-11-05 17:51:35.404225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.533 [2024-11-05 17:51:35.404232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:15.533 [2024-11-05 17:51:35.404242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:15.533 [2024-11-05 17:51:35.404249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.533 [2024-11-05 17:51:35.404281] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:15.533 [2024-11-05 17:51:35.404390] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:15.533 [2024-11-05 17:51:35.404404] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:15.533 [2024-11-05 17:51:35.404413] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:15.533 [2024-11-05 17:51:35.404423] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:15.534 [2024-11-05 17:51:35.404430] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:15.534 [2024-11-05 17:51:35.404438] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:15.534 [2024-11-05 17:51:35.404444] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:15.534 [2024-11-05 17:51:35.404452] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:15.534 [2024-11-05 17:51:35.404459] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:15.534 [2024-11-05 17:51:35.404467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.534 [2024-11-05 17:51:35.404473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:15.534 [2024-11-05 17:51:35.404480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 
00:17:15.534 [2024-11-05 17:51:35.404485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.534 [2024-11-05 17:51:35.404563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.534 [2024-11-05 17:51:35.404573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:15.534 [2024-11-05 17:51:35.404580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:17:15.534 [2024-11-05 17:51:35.404585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.534 [2024-11-05 17:51:35.404682] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:15.534 [2024-11-05 17:51:35.404701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:15.534 [2024-11-05 17:51:35.404709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:15.534 [2024-11-05 17:51:35.404715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:15.534 [2024-11-05 17:51:35.404723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:15.534 [2024-11-05 17:51:35.404728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:15.534 [2024-11-05 17:51:35.404735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:15.534 [2024-11-05 17:51:35.404740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:15.534 [2024-11-05 17:51:35.404746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:15.534 [2024-11-05 17:51:35.404751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:15.534 [2024-11-05 17:51:35.404757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:15.534 [2024-11-05 17:51:35.404762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:15.534 [2024-11-05 17:51:35.404771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:15.534 [2024-11-05 17:51:35.404776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:15.534 [2024-11-05 17:51:35.404783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:15.534 [2024-11-05 17:51:35.404787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:15.534 [2024-11-05 17:51:35.404794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:15.534 [2024-11-05 17:51:35.404799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:15.534 [2024-11-05 17:51:35.404805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:15.534 [2024-11-05 17:51:35.404810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:15.534 [2024-11-05 17:51:35.404817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:15.534 [2024-11-05 17:51:35.404821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:15.534 [2024-11-05 17:51:35.404828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:15.534 [2024-11-05 17:51:35.404832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:15.534 [2024-11-05 17:51:35.404839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:15.534 [2024-11-05 17:51:35.404844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:15.534 [2024-11-05 17:51:35.404850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 
107.12 MiB 00:17:15.534 [2024-11-05 17:51:35.404854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:15.534 [2024-11-05 17:51:35.404863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:15.534 [2024-11-05 17:51:35.404867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:15.534 [2024-11-05 17:51:35.404873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:15.534 [2024-11-05 17:51:35.404878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:15.534 [2024-11-05 17:51:35.404885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:15.534 [2024-11-05 17:51:35.404892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:15.534 [2024-11-05 17:51:35.404898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:15.534 [2024-11-05 17:51:35.404904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:15.534 [2024-11-05 17:51:35.404910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:15.534 [2024-11-05 17:51:35.404915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:15.534 [2024-11-05 17:51:35.404922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:15.534 [2024-11-05 17:51:35.404927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:15.534 [2024-11-05 17:51:35.404934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:15.534 [2024-11-05 17:51:35.404939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:15.534 [2024-11-05 17:51:35.404946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:15.534 [2024-11-05 17:51:35.404951] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:15.534 [2024-11-05 17:51:35.404976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:15.534 [2024-11-05 17:51:35.404982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:15.534 [2024-11-05 17:51:35.404989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:15.534 [2024-11-05 17:51:35.404995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:15.534 [2024-11-05 17:51:35.405001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:15.534 [2024-11-05 17:51:35.405006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:15.534 [2024-11-05 17:51:35.405013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:15.534 [2024-11-05 17:51:35.405018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:15.534 [2024-11-05 17:51:35.405025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:15.534 [2024-11-05 17:51:35.405032] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:15.534 [2024-11-05 17:51:35.405041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:15.534 [2024-11-05 17:51:35.405056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:15.534 [2024-11-05 17:51:35.405078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:15.534 [2024-11-05 17:51:35.405084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:15.534 [2024-11-05 17:51:35.405091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:15.534 [2024-11-05 17:51:35.405097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:15.534 [2024-11-05 17:51:35.405106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:15.534 [2024-11-05 17:51:35.405112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:15.534 [2024-11-05 17:51:35.405118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:15.534 [2024-11-05 17:51:35.405124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:15.534 [2024-11-05 17:51:35.405131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:15.534 [2024-11-05 17:51:35.405138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:15.534 [2024-11-05 17:51:35.405146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:15.534 [2024-11-05 17:51:35.405152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:15.534 [2024-11-05 17:51:35.405159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:15.534 [2024-11-05 17:51:35.405164] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:15.534 [2024-11-05 17:51:35.405183] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:15.534 [2024-11-05 17:51:35.405189] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:15.534 [2024-11-05 17:51:35.405196] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:15.534 [2024-11-05 17:51:35.405202] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:15.534 [2024-11-05 17:51:35.405208] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:15.534 [2024-11-05 17:51:35.405215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.534 [2024-11-05 17:51:35.405231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:15.534 [2024-11-05 17:51:35.405237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.584 ms 00:17:15.534 
[2024-11-05 17:51:35.405244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.534 [2024-11-05 17:51:35.405319] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:17:15.534 [2024-11-05 17:51:35.405329] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:18.847 [2024-11-05 17:51:38.262344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.847 [2024-11-05 17:51:38.262603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:18.847 [2024-11-05 17:51:38.262628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2857.011 ms 00:17:18.847 [2024-11-05 17:51:38.262640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.847 [2024-11-05 17:51:38.273963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.847 [2024-11-05 17:51:38.274134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:18.847 [2024-11-05 17:51:38.274155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.193 ms 00:17:18.847 [2024-11-05 17:51:38.274169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.847 [2024-11-05 17:51:38.274337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.847 [2024-11-05 17:51:38.274350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:18.847 [2024-11-05 17:51:38.274363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:17:18.847 [2024-11-05 17:51:38.274373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.847 [2024-11-05 17:51:38.293162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.847 [2024-11-05 17:51:38.293221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:18.847 [2024-11-05 17:51:38.293237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.753 ms 00:17:18.847 [2024-11-05 17:51:38.293250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.847 [2024-11-05 17:51:38.293365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.847 [2024-11-05 17:51:38.293388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:18.847 [2024-11-05 17:51:38.293400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:18.847 [2024-11-05 17:51:38.293413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.847 [2024-11-05 17:51:38.293869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.847 [2024-11-05 17:51:38.293901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:18.847 [2024-11-05 17:51:38.293913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:17:18.847 [2024-11-05 17:51:38.293928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.847 [2024-11-05 17:51:38.294109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.847 [2024-11-05 17:51:38.294123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:18.847 [2024-11-05 17:51:38.294154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:17:18.847 [2024-11-05 17:51:38.294166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:18.847 [2024-11-05 17:51:38.302002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.847 [2024-11-05 17:51:38.302185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:18.848 [2024-11-05 17:51:38.302205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.801 ms 00:17:18.848 [2024-11-05 17:51:38.302218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.848 [2024-11-05 17:51:38.311389] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:18.848 [2024-11-05 17:51:38.328886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.848 [2024-11-05 17:51:38.328922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:18.848 [2024-11-05 17:51:38.328936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.558 ms 00:17:18.848 [2024-11-05 17:51:38.328944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.848 [2024-11-05 17:51:38.388663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.848 [2024-11-05 17:51:38.388843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:18.848 [2024-11-05 17:51:38.388868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.637 ms 00:17:18.848 [2024-11-05 17:51:38.388887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.848 [2024-11-05 17:51:38.389127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.848 [2024-11-05 17:51:38.389140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:18.848 [2024-11-05 17:51:38.389151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:17:18.848 [2024-11-05 17:51:38.389160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.848 [2024-11-05 17:51:38.392366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.848 [2024-11-05 17:51:38.392400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:18.848 [2024-11-05 17:51:38.392412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.140 ms 00:17:18.848 [2024-11-05 17:51:38.392420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.848 [2024-11-05 17:51:38.395403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.848 [2024-11-05 17:51:38.395515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:18.848 [2024-11-05 17:51:38.395534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.928 ms 00:17:18.848 [2024-11-05 17:51:38.395542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.848 [2024-11-05 17:51:38.395862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.848 [2024-11-05 17:51:38.395878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:18.848 [2024-11-05 17:51:38.395891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:17:18.848 [2024-11-05 17:51:38.395899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.848 [2024-11-05 17:51:38.429168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.848 [2024-11-05 17:51:38.429299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L 
region 00:17:18.848 [2024-11-05 17:51:38.429320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.225 ms 00:17:18.848 [2024-11-05 17:51:38.429332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.848 [2024-11-05 17:51:38.433764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.848 [2024-11-05 17:51:38.433799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:18.848 [2024-11-05 17:51:38.433811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.331 ms 00:17:18.848 [2024-11-05 17:51:38.433820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.848 [2024-11-05 17:51:38.437028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.848 [2024-11-05 17:51:38.437155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:18.848 [2024-11-05 17:51:38.437174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.157 ms 00:17:18.848 [2024-11-05 17:51:38.437182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.848 [2024-11-05 17:51:38.440793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.848 [2024-11-05 17:51:38.440916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:18.848 [2024-11-05 17:51:38.440938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.569 ms 00:17:18.848 [2024-11-05 17:51:38.440947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.848 [2024-11-05 17:51:38.441004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.848 [2024-11-05 17:51:38.441014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:18.848 [2024-11-05 17:51:38.441025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:18.848 [2024-11-05 17:51:38.441032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.848 [2024-11-05 17:51:38.441130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.848 [2024-11-05 17:51:38.441139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:18.848 [2024-11-05 17:51:38.441162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:18.848 [2024-11-05 17:51:38.441169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.848 [2024-11-05 17:51:38.442145] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:18.848 [2024-11-05 17:51:38.443196] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3051.165 ms, result 0 00:17:18.848 [2024-11-05 17:51:38.444046] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:18.848 { 00:17:18.848 "name": "ftl0", 00:17:18.848 "uuid": "01e86416-23e3-4012-8803-d38bc6f06433" 00:17:18.848 } 00:17:18.848 17:51:38 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:17:18.848 17:51:38 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:17:18.848 17:51:38 ftl.ftl_trim -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:18.848 17:51:38 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local i 00:17:18.848 17:51:38 ftl.ftl_trim -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:18.848 17:51:38 ftl.ftl_trim -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:18.848 17:51:38 ftl.ftl_trim -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:18.848 17:51:38 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:19.110 [ 00:17:19.110 { 00:17:19.110 "name": "ftl0", 00:17:19.110 "aliases": [ 00:17:19.110 "01e86416-23e3-4012-8803-d38bc6f06433" 00:17:19.110 ], 00:17:19.110 "product_name": "FTL disk", 00:17:19.110 "block_size": 4096, 00:17:19.110 "num_blocks": 23592960, 00:17:19.110 "uuid": "01e86416-23e3-4012-8803-d38bc6f06433", 00:17:19.110 "assigned_rate_limits": { 00:17:19.110 "rw_ios_per_sec": 0, 00:17:19.110 "rw_mbytes_per_sec": 0, 00:17:19.110 "r_mbytes_per_sec": 0, 00:17:19.110 "w_mbytes_per_sec": 0 00:17:19.110 }, 00:17:19.110 "claimed": false, 00:17:19.110 "zoned": false, 00:17:19.110 "supported_io_types": { 00:17:19.110 "read": true, 00:17:19.110 "write": true, 00:17:19.110 "unmap": true, 00:17:19.110 "flush": true, 00:17:19.110 "reset": false, 00:17:19.110 "nvme_admin": false, 00:17:19.110 "nvme_io": false, 00:17:19.110 "nvme_io_md": false, 00:17:19.110 "write_zeroes": true, 00:17:19.110 "zcopy": false, 00:17:19.110 "get_zone_info": false, 00:17:19.110 "zone_management": false, 00:17:19.110 "zone_append": false, 00:17:19.110 "compare": false, 00:17:19.110 "compare_and_write": false, 00:17:19.110 "abort": false, 00:17:19.110 "seek_hole": false, 00:17:19.110 "seek_data": false, 00:17:19.110 "copy": false, 00:17:19.110 "nvme_iov_md": false 00:17:19.110 }, 00:17:19.110 "driver_specific": { 00:17:19.110 "ftl": { 00:17:19.110 "base_bdev": "19944047-96b0-4cf2-b4d4-f4b804b99e0d", 00:17:19.110 "cache": "nvc0n1p0" 00:17:19.110 } 00:17:19.110 } 00:17:19.110 } 00:17:19.110 ] 00:17:19.110 17:51:38 ftl.ftl_trim -- common/autotest_common.sh@909 -- # return 0 00:17:19.110 17:51:38 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:17:19.110 17:51:38 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:19.372 17:51:39 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:17:19.372 17:51:39 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:17:19.372 17:51:39 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:17:19.372 { 00:17:19.372 "name": "ftl0", 00:17:19.372 "aliases": [ 00:17:19.372 "01e86416-23e3-4012-8803-d38bc6f06433" 00:17:19.372 ], 00:17:19.372 "product_name": "FTL disk", 00:17:19.372 "block_size": 4096, 00:17:19.372 "num_blocks": 23592960, 00:17:19.372 "uuid": "01e86416-23e3-4012-8803-d38bc6f06433", 00:17:19.372 "assigned_rate_limits": { 00:17:19.372 "rw_ios_per_sec": 0, 00:17:19.372 "rw_mbytes_per_sec": 0, 00:17:19.372 "r_mbytes_per_sec": 0, 00:17:19.372 "w_mbytes_per_sec": 0 00:17:19.372 }, 00:17:19.372 "claimed": false, 00:17:19.372 "zoned": false, 00:17:19.372 "supported_io_types": { 00:17:19.372 "read": true, 00:17:19.372 "write": true, 00:17:19.372 "unmap": true, 00:17:19.372 "flush": true, 00:17:19.372 "reset": false, 00:17:19.372 "nvme_admin": false, 00:17:19.372 "nvme_io": false, 00:17:19.372 "nvme_io_md": false, 00:17:19.372 "write_zeroes": true, 00:17:19.372 "zcopy": false, 00:17:19.372 "get_zone_info": false, 00:17:19.372 "zone_management": false, 00:17:19.372 "zone_append": false, 00:17:19.372 "compare": false, 00:17:19.372 "compare_and_write": false, 00:17:19.372 "abort": false, 00:17:19.372 "seek_hole": false, 
00:17:19.372 "seek_data": false, 00:17:19.372 "copy": false, 00:17:19.372 "nvme_iov_md": false 00:17:19.372 }, 00:17:19.372 "driver_specific": { 00:17:19.372 "ftl": { 00:17:19.372 "base_bdev": "19944047-96b0-4cf2-b4d4-f4b804b99e0d", 00:17:19.372 "cache": "nvc0n1p0" 00:17:19.372 } 00:17:19.372 } 00:17:19.372 } 00:17:19.372 ]' 00:17:19.372 17:51:39 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:17:19.372 17:51:39 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:17:19.372 17:51:39 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:19.636 [2024-11-05 17:51:39.544141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.636 [2024-11-05 17:51:39.544214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:19.636 [2024-11-05 17:51:39.544229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:19.636 [2024-11-05 17:51:39.544242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.636 [2024-11-05 17:51:39.544279] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:19.636 [2024-11-05 17:51:39.544865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.636 [2024-11-05 17:51:39.544887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:19.636 [2024-11-05 17:51:39.544900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:17:19.636 [2024-11-05 17:51:39.544907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.636 [2024-11-05 17:51:39.545515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.636 [2024-11-05 17:51:39.545537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:19.636 [2024-11-05 17:51:39.545549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:17:19.636 [2024-11-05 17:51:39.545557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.636 [2024-11-05 17:51:39.549428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.636 [2024-11-05 17:51:39.549448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:19.636 [2024-11-05 17:51:39.549460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.841 ms 00:17:19.636 [2024-11-05 17:51:39.549469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.636 [2024-11-05 17:51:39.556499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.637 [2024-11-05 17:51:39.556531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:19.637 [2024-11-05 17:51:39.556547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.975 ms 00:17:19.637 [2024-11-05 17:51:39.556556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.637 [2024-11-05 17:51:39.558488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.637 [2024-11-05 17:51:39.558540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:19.637 [2024-11-05 17:51:39.558551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.812 ms 00:17:19.637 [2024-11-05 17:51:39.558559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.637 [2024-11-05 17:51:39.562955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:17:19.637 [2024-11-05 17:51:39.562991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:19.637 [2024-11-05 17:51:39.563004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.347 ms 00:17:19.637 [2024-11-05 17:51:39.563013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.637 [2024-11-05 17:51:39.563243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.637 [2024-11-05 17:51:39.563260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:19.637 [2024-11-05 17:51:39.563271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:17:19.637 [2024-11-05 17:51:39.563279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.637 [2024-11-05 17:51:39.565980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.637 [2024-11-05 17:51:39.566012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:19.637 [2024-11-05 17:51:39.566026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.668 ms 00:17:19.637 [2024-11-05 17:51:39.566034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.637 [2024-11-05 17:51:39.567593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.637 [2024-11-05 17:51:39.567625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:19.637 [2024-11-05 17:51:39.567637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.499 ms 00:17:19.637 [2024-11-05 17:51:39.567645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.637 [2024-11-05 17:51:39.569812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.637 [2024-11-05 17:51:39.569843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:19.637 [2024-11-05 17:51:39.569854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.118 ms 00:17:19.637 [2024-11-05 17:51:39.569862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.637 [2024-11-05 17:51:39.571486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.637 [2024-11-05 17:51:39.571517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:19.637 [2024-11-05 17:51:39.571528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.516 ms 00:17:19.637 [2024-11-05 17:51:39.571535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.637 [2024-11-05 17:51:39.571586] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:19.637 [2024-11-05 17:51:39.571601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 
0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.571999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.572006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.572015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.572023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.572032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.572039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.572051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.572058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.572083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.572092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.572101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.572109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.572120] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.572128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.572139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.572146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.572156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:19.637 [2024-11-05 17:51:39.572164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 
17:51:39.572334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:19.638 [2024-11-05 17:51:39.572521] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:19.638 [2024-11-05 17:51:39.572532] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 01e86416-23e3-4012-8803-d38bc6f06433 00:17:19.638 [2024-11-05 17:51:39.572540] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:19.638 [2024-11-05 17:51:39.572549] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:19.638 [2024-11-05 17:51:39.572559] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:19.638 [2024-11-05 17:51:39.572569] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 
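WAF in the stats dump above is presumably the write amplification factor, total media writes divided by user writes. With total writes = 960 and user writes = 0 — only FTL metadata was written to ftl0 before this unload — the quotient is reported as inf:

    WAF = total writes / user writes = 960 / 0 -> inf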
00:17:19.638 [2024-11-05 17:51:39.572577] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:19.638 [2024-11-05 17:51:39.572590] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:19.638 [2024-11-05 17:51:39.572597] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:19.638 [2024-11-05 17:51:39.572605] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:19.638 [2024-11-05 17:51:39.572612] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:19.638 [2024-11-05 17:51:39.572622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.638 [2024-11-05 17:51:39.572630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:19.638 [2024-11-05 17:51:39.572641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.036 ms 00:17:19.638 [2024-11-05 17:51:39.572654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.638 [2024-11-05 17:51:39.574650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.638 [2024-11-05 17:51:39.574669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:19.638 [2024-11-05 17:51:39.574681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.965 ms 00:17:19.638 [2024-11-05 17:51:39.574690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.638 [2024-11-05 17:51:39.574859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.638 [2024-11-05 17:51:39.574870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:19.638 [2024-11-05 17:51:39.574882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:17:19.638 [2024-11-05 17:51:39.574890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.638 [2024-11-05 17:51:39.581623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.638 [2024-11-05 17:51:39.581781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:19.638 [2024-11-05 17:51:39.581870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.638 [2024-11-05 17:51:39.581893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.638 [2024-11-05 17:51:39.582125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.638 [2024-11-05 17:51:39.582259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:19.638 [2024-11-05 17:51:39.582312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.638 [2024-11-05 17:51:39.582429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.638 [2024-11-05 17:51:39.582517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.638 [2024-11-05 17:51:39.582613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:19.638 [2024-11-05 17:51:39.582639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.638 [2024-11-05 17:51:39.582659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.638 [2024-11-05 17:51:39.582703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.638 [2024-11-05 17:51:39.582757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:19.638 [2024-11-05 17:51:39.582783] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.638 [2024-11-05 17:51:39.582802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.638 [2024-11-05 17:51:39.595353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.638 [2024-11-05 17:51:39.595514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:19.638 [2024-11-05 17:51:39.595570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.638 [2024-11-05 17:51:39.595613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.638 [2024-11-05 17:51:39.605650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.638 [2024-11-05 17:51:39.605789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:19.638 [2024-11-05 17:51:39.605849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.638 [2024-11-05 17:51:39.605891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.638 [2024-11-05 17:51:39.605995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.638 [2024-11-05 17:51:39.606056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:19.638 [2024-11-05 17:51:39.606136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.638 [2024-11-05 17:51:39.606158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.638 [2024-11-05 17:51:39.606254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.638 [2024-11-05 17:51:39.606346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:19.638 [2024-11-05 17:51:39.606394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.638 [2024-11-05 17:51:39.606417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.638 [2024-11-05 17:51:39.606595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.638 [2024-11-05 17:51:39.606657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:19.638 [2024-11-05 17:51:39.606715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.638 [2024-11-05 17:51:39.606740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.638 [2024-11-05 17:51:39.606826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.638 [2024-11-05 17:51:39.606853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:19.638 [2024-11-05 17:51:39.606877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.639 [2024-11-05 17:51:39.606927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.639 [2024-11-05 17:51:39.607001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.639 [2024-11-05 17:51:39.607025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:19.639 [2024-11-05 17:51:39.607085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.639 [2024-11-05 17:51:39.607111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.639 [2024-11-05 17:51:39.607184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.639 [2024-11-05 17:51:39.607209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base 
bdev 00:17:19.639 [2024-11-05 17:51:39.607230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.639 [2024-11-05 17:51:39.607278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.639 [2024-11-05 17:51:39.607513] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 63.331 ms, result 0 00:17:19.639 true 00:17:19.900 17:51:39 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 86234 00:17:19.900 17:51:39 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 86234 ']' 00:17:19.900 17:51:39 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 86234 00:17:19.900 17:51:39 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:17:19.900 17:51:39 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:19.900 17:51:39 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86234 00:17:19.900 17:51:39 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:19.900 17:51:39 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:19.900 17:51:39 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86234' 00:17:19.900 killing process with pid 86234 00:17:19.900 17:51:39 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 86234 00:17:19.900 17:51:39 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 86234 00:17:32.115 17:51:50 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:17:32.115 65536+0 records in 00:17:32.115 65536+0 records out 00:17:32.115 268435456 bytes (268 MB, 256 MiB) copied, 1.07626 s, 249 MB/s 00:17:32.115 17:51:51 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:32.115 [2024-11-05 17:51:51.334945] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:17:32.115 [2024-11-05 17:51:51.335084] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86410 ] 00:17:32.115 [2024-11-05 17:51:51.462857] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
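As a back-of-the-envelope check on the values printed above (a minimal sketch using only numbers copied from this log, not part of the test scripts): bdev_get_bdevs reported block_size 4096 and num_blocks 23592960 for ftl0, i.e. a 90 GiB volume, and dd wrote 65536 blocks of 4 KiB (256 MiB) in 1.07626 s, consistent with the reported 249 MB/s:

# hypothetical sanity checks; every constant is copied from the log above
echo $((23592960 * 4096 / 1024 / 1024 / 1024))                  # 90 -> ftl0 capacity is 90 GiB
echo $((65536 * 4096))                                          # 268435456 bytes = 256 MiB, as dd reported
awk 'BEGIN { printf "%.0f MB/s\n", 268435456 / 1.07626 / 1e6 }' # ~249 MB/s, as dd reported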
00:17:32.115 [2024-11-05 17:51:51.493946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.115 [2024-11-05 17:51:51.513393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.115 [2024-11-05 17:51:51.601483] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:32.115 [2024-11-05 17:51:51.601541] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:32.115 [2024-11-05 17:51:51.759121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.115 [2024-11-05 17:51:51.759183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:32.115 [2024-11-05 17:51:51.759200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:32.115 [2024-11-05 17:51:51.759208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.115 [2024-11-05 17:51:51.761596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.115 [2024-11-05 17:51:51.761639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:32.115 [2024-11-05 17:51:51.761649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.368 ms 00:17:32.115 [2024-11-05 17:51:51.761657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.115 [2024-11-05 17:51:51.761744] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:32.115 [2024-11-05 17:51:51.761979] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:32.115 [2024-11-05 17:51:51.761997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.115 [2024-11-05 17:51:51.762007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:32.115 [2024-11-05 17:51:51.762016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:17:32.115 [2024-11-05 17:51:51.762024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.115 [2024-11-05 17:51:51.763646] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:32.115 [2024-11-05 17:51:51.766190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.115 [2024-11-05 17:51:51.766233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:32.115 [2024-11-05 17:51:51.766244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.547 ms 00:17:32.115 [2024-11-05 17:51:51.766252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.115 [2024-11-05 17:51:51.766322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.115 [2024-11-05 17:51:51.766333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:32.115 [2024-11-05 17:51:51.766343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:32.115 [2024-11-05 17:51:51.766355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.115 [2024-11-05 17:51:51.771690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.115 [2024-11-05 17:51:51.771724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:32.115 [2024-11-05 17:51:51.771735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.286 ms 00:17:32.115 [2024-11-05 17:51:51.771747] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.116 [2024-11-05 17:51:51.771861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.116 [2024-11-05 17:51:51.771873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:32.116 [2024-11-05 17:51:51.771883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:32.116 [2024-11-05 17:51:51.771890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.116 [2024-11-05 17:51:51.771921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.116 [2024-11-05 17:51:51.771931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:32.116 [2024-11-05 17:51:51.771938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:32.116 [2024-11-05 17:51:51.771946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.116 [2024-11-05 17:51:51.771969] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:32.116 [2024-11-05 17:51:51.773434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.116 [2024-11-05 17:51:51.773461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:32.116 [2024-11-05 17:51:51.773471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.472 ms 00:17:32.116 [2024-11-05 17:51:51.773488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.116 [2024-11-05 17:51:51.773538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.116 [2024-11-05 17:51:51.773548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:32.116 [2024-11-05 17:51:51.773558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:32.116 [2024-11-05 17:51:51.773567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.116 [2024-11-05 17:51:51.773585] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:32.116 [2024-11-05 17:51:51.773607] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:32.116 [2024-11-05 17:51:51.773644] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:32.116 [2024-11-05 17:51:51.773663] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:32.116 [2024-11-05 17:51:51.773767] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:32.116 [2024-11-05 17:51:51.773780] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:32.116 [2024-11-05 17:51:51.773795] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:32.116 [2024-11-05 17:51:51.773805] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:32.116 [2024-11-05 17:51:51.773816] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:32.116 [2024-11-05 17:51:51.773825] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:32.116 [2024-11-05 17:51:51.773834] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] 
L2P address size: 4 00:17:32.116 [2024-11-05 17:51:51.773845] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:32.116 [2024-11-05 17:51:51.773857] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:32.116 [2024-11-05 17:51:51.773867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.116 [2024-11-05 17:51:51.773879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:32.116 [2024-11-05 17:51:51.773888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:17:32.116 [2024-11-05 17:51:51.773897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.116 [2024-11-05 17:51:51.773996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.116 [2024-11-05 17:51:51.774007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:32.116 [2024-11-05 17:51:51.774015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:32.116 [2024-11-05 17:51:51.774023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.116 [2024-11-05 17:51:51.774139] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:32.116 [2024-11-05 17:51:51.774155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:32.116 [2024-11-05 17:51:51.774164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:32.116 [2024-11-05 17:51:51.774178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:32.116 [2024-11-05 17:51:51.774186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:32.116 [2024-11-05 17:51:51.774199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:32.116 [2024-11-05 17:51:51.774220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:32.116 [2024-11-05 17:51:51.774228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:32.116 [2024-11-05 17:51:51.774236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:32.116 [2024-11-05 17:51:51.774244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:32.116 [2024-11-05 17:51:51.774253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:32.116 [2024-11-05 17:51:51.774262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:32.116 [2024-11-05 17:51:51.774269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:32.116 [2024-11-05 17:51:51.774277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:32.116 [2024-11-05 17:51:51.774285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:32.116 [2024-11-05 17:51:51.774292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:32.116 [2024-11-05 17:51:51.774301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:32.116 [2024-11-05 17:51:51.774309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:32.116 [2024-11-05 17:51:51.774316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:32.116 [2024-11-05 17:51:51.774324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:32.116 [2024-11-05 17:51:51.774332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:32.116 [2024-11-05 17:51:51.774340] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:32.116 [2024-11-05 17:51:51.774352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:32.116 [2024-11-05 17:51:51.774362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:32.116 [2024-11-05 17:51:51.774369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:32.116 [2024-11-05 17:51:51.774377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:32.116 [2024-11-05 17:51:51.774386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:32.116 [2024-11-05 17:51:51.774394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:32.116 [2024-11-05 17:51:51.774402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:32.116 [2024-11-05 17:51:51.774409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:32.116 [2024-11-05 17:51:51.774416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:32.116 [2024-11-05 17:51:51.774425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:32.116 [2024-11-05 17:51:51.774432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:32.116 [2024-11-05 17:51:51.774440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:32.116 [2024-11-05 17:51:51.774449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:32.116 [2024-11-05 17:51:51.774456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:32.116 [2024-11-05 17:51:51.774464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:32.116 [2024-11-05 17:51:51.774472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:32.116 [2024-11-05 17:51:51.774481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:32.116 [2024-11-05 17:51:51.774488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:32.116 [2024-11-05 17:51:51.774497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:32.116 [2024-11-05 17:51:51.774504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:32.116 [2024-11-05 17:51:51.774511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:32.116 [2024-11-05 17:51:51.774519] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:32.116 [2024-11-05 17:51:51.774527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:32.116 [2024-11-05 17:51:51.774539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:32.116 [2024-11-05 17:51:51.774555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:32.116 [2024-11-05 17:51:51.774564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:32.116 [2024-11-05 17:51:51.774577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:32.116 [2024-11-05 17:51:51.774585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:32.116 [2024-11-05 17:51:51.774597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:32.116 [2024-11-05 17:51:51.774604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:32.116 [2024-11-05 17:51:51.774618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:17:32.116 [2024-11-05 17:51:51.774631] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:32.116 [2024-11-05 17:51:51.774649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:32.116 [2024-11-05 17:51:51.774664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:32.116 [2024-11-05 17:51:51.774674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:32.116 [2024-11-05 17:51:51.774688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:32.116 [2024-11-05 17:51:51.774696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:32.116 [2024-11-05 17:51:51.774708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:32.116 [2024-11-05 17:51:51.774716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:32.116 [2024-11-05 17:51:51.774728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:32.116 [2024-11-05 17:51:51.774736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:32.116 [2024-11-05 17:51:51.774744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:32.117 [2024-11-05 17:51:51.774752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:32.117 [2024-11-05 17:51:51.774761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:32.117 [2024-11-05 17:51:51.774768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:32.117 [2024-11-05 17:51:51.774776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:32.117 [2024-11-05 17:51:51.774786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:32.117 [2024-11-05 17:51:51.774794] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:32.117 [2024-11-05 17:51:51.774806] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:32.117 [2024-11-05 17:51:51.774823] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:32.117 [2024-11-05 17:51:51.774833] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:32.117 [2024-11-05 17:51:51.774841] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:32.117 [2024-11-05 17:51:51.774850] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:32.117 [2024-11-05 17:51:51.774859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.117 [2024-11-05 17:51:51.774867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:32.117 [2024-11-05 17:51:51.774878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.804 ms 00:17:32.117 [2024-11-05 17:51:51.774886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.117 [2024-11-05 17:51:51.784477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.117 [2024-11-05 17:51:51.784516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:32.117 [2024-11-05 17:51:51.784528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.539 ms 00:17:32.117 [2024-11-05 17:51:51.784537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.117 [2024-11-05 17:51:51.784675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.117 [2024-11-05 17:51:51.784691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:32.117 [2024-11-05 17:51:51.784700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:32.117 [2024-11-05 17:51:51.784710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.117 [2024-11-05 17:51:51.802528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.117 [2024-11-05 17:51:51.802576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:32.117 [2024-11-05 17:51:51.802589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.795 ms 00:17:32.117 [2024-11-05 17:51:51.802602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.117 [2024-11-05 17:51:51.802707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.117 [2024-11-05 17:51:51.802719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:32.117 [2024-11-05 17:51:51.802729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:32.117 [2024-11-05 17:51:51.802743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.117 [2024-11-05 17:51:51.803161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.117 [2024-11-05 17:51:51.803180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:32.117 [2024-11-05 17:51:51.803192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:17:32.117 [2024-11-05 17:51:51.803201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.117 [2024-11-05 17:51:51.803340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.117 [2024-11-05 17:51:51.803360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:32.117 [2024-11-05 17:51:51.803370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:17:32.117 [2024-11-05 17:51:51.803378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.117 [2024-11-05 17:51:51.810161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.117 [2024-11-05 
17:51:51.810213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:32.117 [2024-11-05 17:51:51.810227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.755 ms 00:17:32.117 [2024-11-05 17:51:51.810238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.117 [2024-11-05 17:51:51.813343] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:32.117 [2024-11-05 17:51:51.813563] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:32.117 [2024-11-05 17:51:51.813585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.117 [2024-11-05 17:51:51.813598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:32.117 [2024-11-05 17:51:51.813612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.216 ms 00:17:32.117 [2024-11-05 17:51:51.813624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.117 [2024-11-05 17:51:51.828755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.117 [2024-11-05 17:51:51.828950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:32.117 [2024-11-05 17:51:51.828968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.043 ms 00:17:32.117 [2024-11-05 17:51:51.828977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.117 [2024-11-05 17:51:51.831645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.117 [2024-11-05 17:51:51.831695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:32.117 [2024-11-05 17:51:51.831713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.547 ms 00:17:32.117 [2024-11-05 17:51:51.831726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.117 [2024-11-05 17:51:51.833888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.117 [2024-11-05 17:51:51.833924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:32.117 [2024-11-05 17:51:51.833933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.096 ms 00:17:32.117 [2024-11-05 17:51:51.833941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.117 [2024-11-05 17:51:51.834300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.117 [2024-11-05 17:51:51.834321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:32.117 [2024-11-05 17:51:51.834333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:17:32.117 [2024-11-05 17:51:51.834340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.117 [2024-11-05 17:51:51.852503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.117 [2024-11-05 17:51:51.852575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:32.117 [2024-11-05 17:51:51.852589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.133 ms 00:17:32.117 [2024-11-05 17:51:51.852598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.117 [2024-11-05 17:51:51.860474] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:32.117 [2024-11-05 17:51:51.875353] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.117 [2024-11-05 17:51:51.875560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:32.117 [2024-11-05 17:51:51.875578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.616 ms 00:17:32.117 [2024-11-05 17:51:51.875589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.117 [2024-11-05 17:51:51.875690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.117 [2024-11-05 17:51:51.875702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:32.117 [2024-11-05 17:51:51.875711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:32.117 [2024-11-05 17:51:51.875722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.117 [2024-11-05 17:51:51.875770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.117 [2024-11-05 17:51:51.875779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:32.117 [2024-11-05 17:51:51.875787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:32.117 [2024-11-05 17:51:51.875801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.117 [2024-11-05 17:51:51.875822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.117 [2024-11-05 17:51:51.875830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:32.117 [2024-11-05 17:51:51.875839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:32.117 [2024-11-05 17:51:51.875847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.117 [2024-11-05 17:51:51.875884] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:32.117 [2024-11-05 17:51:51.875893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.117 [2024-11-05 17:51:51.875901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:32.117 [2024-11-05 17:51:51.875909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:32.117 [2024-11-05 17:51:51.875916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.117 [2024-11-05 17:51:51.879547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.117 [2024-11-05 17:51:51.879584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:32.117 [2024-11-05 17:51:51.879594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.612 ms 00:17:32.117 [2024-11-05 17:51:51.879603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.117 [2024-11-05 17:51:51.879696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.117 [2024-11-05 17:51:51.879707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:32.117 [2024-11-05 17:51:51.879716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:32.117 [2024-11-05 17:51:51.879725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.117 [2024-11-05 17:51:51.880618] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:32.117 [2024-11-05 17:51:51.881624] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 121.263 
ms, result 0 00:17:32.117 [2024-11-05 17:51:51.882139] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:32.117 [2024-11-05 17:51:51.892040] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:33.050  [2024-11-05T17:51:53.973Z] Copying: 36/256 [MB] (36 MBps) [2024-11-05T17:51:54.906Z] Copying: 57/256 [MB] (21 MBps) [2024-11-05T17:51:56.281Z] Copying: 84/256 [MB] (27 MBps) [2024-11-05T17:51:57.215Z] Copying: 106/256 [MB] (22 MBps) [2024-11-05T17:51:58.148Z] Copying: 132/256 [MB] (25 MBps) [2024-11-05T17:51:59.080Z] Copying: 162/256 [MB] (30 MBps) [2024-11-05T17:52:00.013Z] Copying: 195/256 [MB] (32 MBps) [2024-11-05T17:52:00.948Z] Copying: 224/256 [MB] (28 MBps) [2024-11-05T17:52:00.948Z] Copying: 256/256 [MB] (average 28 MBps)[2024-11-05 17:52:00.817521] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:40.957 [2024-11-05 17:52:00.818685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.957 [2024-11-05 17:52:00.818717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:40.957 [2024-11-05 17:52:00.818729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:40.957 [2024-11-05 17:52:00.818738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.957 [2024-11-05 17:52:00.818758] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:40.957 [2024-11-05 17:52:00.819217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.957 [2024-11-05 17:52:00.819240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:40.957 [2024-11-05 17:52:00.819255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.445 ms 00:17:40.957 [2024-11-05 17:52:00.819262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.957 [2024-11-05 17:52:00.820692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.957 [2024-11-05 17:52:00.820716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:40.957 [2024-11-05 17:52:00.820726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.408 ms 00:17:40.957 [2024-11-05 17:52:00.820739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.957 [2024-11-05 17:52:00.826938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.957 [2024-11-05 17:52:00.826966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:40.957 [2024-11-05 17:52:00.826982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.184 ms 00:17:40.957 [2024-11-05 17:52:00.826989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.957 [2024-11-05 17:52:00.833925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.957 [2024-11-05 17:52:00.833953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:40.957 [2024-11-05 17:52:00.833963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.903 ms 00:17:40.957 [2024-11-05 17:52:00.833971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.957 [2024-11-05 17:52:00.835630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.958 [2024-11-05 17:52:00.835659] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:40.958 [2024-11-05 17:52:00.835668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.599 ms 00:17:40.958 [2024-11-05 17:52:00.835683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.958 [2024-11-05 17:52:00.839038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.958 [2024-11-05 17:52:00.839094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:40.958 [2024-11-05 17:52:00.839104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.326 ms 00:17:40.958 [2024-11-05 17:52:00.839113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.958 [2024-11-05 17:52:00.839238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.958 [2024-11-05 17:52:00.839260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:40.958 [2024-11-05 17:52:00.839274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:17:40.958 [2024-11-05 17:52:00.839284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.958 [2024-11-05 17:52:00.841059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.958 [2024-11-05 17:52:00.841098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:40.958 [2024-11-05 17:52:00.841114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.748 ms 00:17:40.958 [2024-11-05 17:52:00.841129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.958 [2024-11-05 17:52:00.842706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.958 [2024-11-05 17:52:00.842734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:40.958 [2024-11-05 17:52:00.842743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.499 ms 00:17:40.958 [2024-11-05 17:52:00.842751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.958 [2024-11-05 17:52:00.843892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.958 [2024-11-05 17:52:00.843919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:40.958 [2024-11-05 17:52:00.843929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.112 ms 00:17:40.958 [2024-11-05 17:52:00.843937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.958 [2024-11-05 17:52:00.845002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.958 [2024-11-05 17:52:00.845028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:40.958 [2024-11-05 17:52:00.845036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.010 ms 00:17:40.958 [2024-11-05 17:52:00.845044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.958 [2024-11-05 17:52:00.845082] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:40.958 [2024-11-05 17:52:00.845096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 
00:17:40.958 [2024-11-05 17:52:00.845121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 
0 state: free 00:17:40.958 [2024-11-05 17:52:00.845304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
53: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:40.958 [2024-11-05 17:52:00.845591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:40.959 [2024-11-05 17:52:00.845598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:40.959 [2024-11-05 17:52:00.845605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:40.959 [2024-11-05 17:52:00.845612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:40.959 [2024-11-05 17:52:00.845619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:40.959 [2024-11-05 17:52:00.845627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:40.959 [2024-11-05 17:52:00.845634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:40.959 [2024-11-05 17:52:00.845641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:40.959 [2024-11-05 17:52:00.845648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:40.959 [2024-11-05 17:52:00.845655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:40.959 [2024-11-05 17:52:00.845662] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:40.959 [2024-11-05 17:52:00.845670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:40.959 [2024-11-05 17:52:00.845678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:40.959 [2024-11-05 17:52:00.845685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:40.959 [2024-11-05 17:52:00.845692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:40.959 [2024-11-05 17:52:00.845699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:40.959 [2024-11-05 17:52:00.845707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:40.959 [2024-11-05 17:52:00.845716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:40.959 [2024-11-05 17:52:00.845723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:40.959 [2024-11-05 17:52:00.845730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:40.959 [2024-11-05 17:52:00.845737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:40.959 [2024-11-05 17:52:00.845744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:40.959 [2024-11-05 17:52:00.845751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:40.959 [2024-11-05 17:52:00.845759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:40.959 [2024-11-05 17:52:00.845765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:40.959 [2024-11-05 17:52:00.845773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:40.959 [2024-11-05 17:52:00.845780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:40.959 [2024-11-05 17:52:00.845787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:40.959 [2024-11-05 17:52:00.845794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:40.959 [2024-11-05 17:52:00.845801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:40.959 [2024-11-05 17:52:00.845808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:40.959 [2024-11-05 17:52:00.845817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:40.959 [2024-11-05 17:52:00.845826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:40.959 [2024-11-05 17:52:00.845841] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:40.959 [2024-11-05 17:52:00.845848] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 01e86416-23e3-4012-8803-d38bc6f06433 00:17:40.959 [2024-11-05 17:52:00.845863] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] total valid LBAs: 0 00:17:40.959 [2024-11-05 17:52:00.845870] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:40.959 [2024-11-05 17:52:00.845877] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:40.959 [2024-11-05 17:52:00.845885] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:40.959 [2024-11-05 17:52:00.845892] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:40.959 [2024-11-05 17:52:00.845900] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:40.959 [2024-11-05 17:52:00.845907] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:40.959 [2024-11-05 17:52:00.845913] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:40.959 [2024-11-05 17:52:00.845920] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:40.959 [2024-11-05 17:52:00.845926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.959 [2024-11-05 17:52:00.845936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:40.959 [2024-11-05 17:52:00.845944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.845 ms 00:17:40.959 [2024-11-05 17:52:00.845951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.959 [2024-11-05 17:52:00.847391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.959 [2024-11-05 17:52:00.847407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:40.959 [2024-11-05 17:52:00.847417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.424 ms 00:17:40.959 [2024-11-05 17:52:00.847425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.959 [2024-11-05 17:52:00.847503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.959 [2024-11-05 17:52:00.847511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:40.959 [2024-11-05 17:52:00.847520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:17:40.959 [2024-11-05 17:52:00.847528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.959 [2024-11-05 17:52:00.852683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.959 [2024-11-05 17:52:00.852715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:40.959 [2024-11-05 17:52:00.852725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.959 [2024-11-05 17:52:00.852733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.959 [2024-11-05 17:52:00.852801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.959 [2024-11-05 17:52:00.852810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:40.959 [2024-11-05 17:52:00.852817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.959 [2024-11-05 17:52:00.852824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.959 [2024-11-05 17:52:00.852861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.959 [2024-11-05 17:52:00.852869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:40.959 [2024-11-05 17:52:00.852876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.959 [2024-11-05 
17:52:00.852883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.959 [2024-11-05 17:52:00.852900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.959 [2024-11-05 17:52:00.852913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:40.959 [2024-11-05 17:52:00.852920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.959 [2024-11-05 17:52:00.852930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.959 [2024-11-05 17:52:00.861676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.959 [2024-11-05 17:52:00.861720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:40.959 [2024-11-05 17:52:00.861730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.959 [2024-11-05 17:52:00.861737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.959 [2024-11-05 17:52:00.868694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.959 [2024-11-05 17:52:00.868738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:40.959 [2024-11-05 17:52:00.868749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.959 [2024-11-05 17:52:00.868756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.959 [2024-11-05 17:52:00.868787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.959 [2024-11-05 17:52:00.868796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:40.959 [2024-11-05 17:52:00.868803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.959 [2024-11-05 17:52:00.868811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.959 [2024-11-05 17:52:00.868838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.959 [2024-11-05 17:52:00.868846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:40.959 [2024-11-05 17:52:00.868859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.959 [2024-11-05 17:52:00.868866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.959 [2024-11-05 17:52:00.868932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.959 [2024-11-05 17:52:00.868941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:40.959 [2024-11-05 17:52:00.868949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.959 [2024-11-05 17:52:00.868956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.959 [2024-11-05 17:52:00.868984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.959 [2024-11-05 17:52:00.868992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:40.959 [2024-11-05 17:52:00.869000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.959 [2024-11-05 17:52:00.869010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.959 [2024-11-05 17:52:00.869045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.959 [2024-11-05 17:52:00.869053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:40.959 [2024-11-05 17:52:00.869060] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.959 [2024-11-05 17:52:00.869085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.959 [2024-11-05 17:52:00.869126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.959 [2024-11-05 17:52:00.869135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:40.959 [2024-11-05 17:52:00.869145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.959 [2024-11-05 17:52:00.869156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.960 [2024-11-05 17:52:00.869280] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 50.584 ms, result 0 00:17:41.524 00:17:41.524 00:17:41.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.524 17:52:01 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=86522 00:17:41.524 17:52:01 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:41.524 17:52:01 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 86522 00:17:41.524 17:52:01 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 86522 ']' 00:17:41.524 17:52:01 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.524 17:52:01 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:41.524 17:52:01 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.524 17:52:01 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:41.524 17:52:01 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:41.524 [2024-11-05 17:52:01.366223] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:17:41.524 [2024-11-05 17:52:01.366341] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86522 ] 00:17:41.524 [2024-11-05 17:52:01.493990] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
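The `-- #` lines in the trace above are bash xtrace from ftl/trim.sh and common/autotest_common.sh, showing the expanded commands the test ran: start spdk_tgt with FTL init logging, wait for its RPC socket, drive it over scripts/rpc.py, then tear it down. A minimal bash sketch of that flow, reconstructed from the xtrace rather than copied from trim.sh — the load_config redirect and config filename are assumptions (bash xtrace does not print redirections), while waitforlisten and killprocess are the test/common/autotest_common.sh helpers this log shows executing:

  #!/usr/bin/env bash
  # Sketch reconstructed from the xtrace in this log -- not the verbatim trim.sh.
  spdk=/home/vagrant/spdk_repo/spdk
  source "$spdk/test/common/autotest_common.sh"   # provides waitforlisten/killprocess

  "$spdk/build/bin/spdk_tgt" -L ftl_init &        # trim.sh@71; emits the FTL startup trace
  svcpid=$!                                       # trim.sh@72; 86522 in this run
  waitforlisten "$svcpid"                         # trim.sh@73; prints the "Waiting for process
                                                  # to start up..." line seen above

  "$spdk/scripts/rpc.py" load_config < ftl.json   # trim.sh@75; ftl.json is a placeholder name

  # trim.sh@78/@79 further down: trim 1024 blocks at each end of the L2P range
  "$spdk/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 0        --num_blocks 1024
  "$spdk/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

  killprocess "$svcpid"                           # trim.sh@81; triggers the 'FTL shutdown' trace

The second unmap's offset checks out against the layout dump below: 23591936 = 23592960 - 1024, i.e. the final 1024 blocks of the 23592960-entry L2P reported there, and each unmap is logged as its own 'FTL trim' management process finishing with result 0.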
00:17:41.783 [2024-11-05 17:52:01.527057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.783 [2024-11-05 17:52:01.545560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.350 17:52:02 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:42.350 17:52:02 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:17:42.350 17:52:02 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:42.611 [2024-11-05 17:52:02.416414] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:42.611 [2024-11-05 17:52:02.416483] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:42.611 [2024-11-05 17:52:02.589864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.611 [2024-11-05 17:52:02.589917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:42.611 [2024-11-05 17:52:02.589933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:42.611 [2024-11-05 17:52:02.589942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.611 [2024-11-05 17:52:02.592602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.611 [2024-11-05 17:52:02.592646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:42.611 [2024-11-05 17:52:02.592659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.640 ms 00:17:42.611 [2024-11-05 17:52:02.592667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.611 [2024-11-05 17:52:02.592822] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:42.611 [2024-11-05 17:52:02.593098] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:42.611 [2024-11-05 17:52:02.593130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.611 [2024-11-05 17:52:02.593138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:42.611 [2024-11-05 17:52:02.593149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:17:42.611 [2024-11-05 17:52:02.593158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.611 [2024-11-05 17:52:02.594467] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:42.611 [2024-11-05 17:52:02.596945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.611 [2024-11-05 17:52:02.596988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:42.611 [2024-11-05 17:52:02.596997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.482 ms 00:17:42.611 [2024-11-05 17:52:02.597007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.611 [2024-11-05 17:52:02.597084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.611 [2024-11-05 17:52:02.597099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:42.611 [2024-11-05 17:52:02.597108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:42.611 [2024-11-05 17:52:02.597117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.611 [2024-11-05 17:52:02.602281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.611 [2024-11-05 
17:52:02.602321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:42.611 [2024-11-05 17:52:02.602330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.116 ms 00:17:42.611 [2024-11-05 17:52:02.602339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.611 [2024-11-05 17:52:02.602429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.611 [2024-11-05 17:52:02.602445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:42.611 [2024-11-05 17:52:02.602454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:42.611 [2024-11-05 17:52:02.602465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.611 [2024-11-05 17:52:02.602492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.611 [2024-11-05 17:52:02.602504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:42.611 [2024-11-05 17:52:02.602512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:42.611 [2024-11-05 17:52:02.602523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.611 [2024-11-05 17:52:02.602545] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:42.878 [2024-11-05 17:52:02.603960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.878 [2024-11-05 17:52:02.603990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:42.878 [2024-11-05 17:52:02.604003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.415 ms 00:17:42.878 [2024-11-05 17:52:02.604015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.878 [2024-11-05 17:52:02.604084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.878 [2024-11-05 17:52:02.604095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:42.878 [2024-11-05 17:52:02.604106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:42.878 [2024-11-05 17:52:02.604114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.878 [2024-11-05 17:52:02.604136] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:42.878 [2024-11-05 17:52:02.604154] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:42.878 [2024-11-05 17:52:02.604195] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:42.878 [2024-11-05 17:52:02.604216] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:42.878 [2024-11-05 17:52:02.604323] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:42.878 [2024-11-05 17:52:02.604334] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:42.878 [2024-11-05 17:52:02.604347] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:42.878 [2024-11-05 17:52:02.604358] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:42.878 [2024-11-05 17:52:02.604371] ftl_layout.c: 687:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:42.878 [2024-11-05 17:52:02.604381] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:42.878 [2024-11-05 17:52:02.604394] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:42.878 [2024-11-05 17:52:02.604402] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:42.878 [2024-11-05 17:52:02.604413] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:42.878 [2024-11-05 17:52:02.604422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.878 [2024-11-05 17:52:02.604434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:42.878 [2024-11-05 17:52:02.604441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:17:42.878 [2024-11-05 17:52:02.604450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.878 [2024-11-05 17:52:02.604535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.878 [2024-11-05 17:52:02.604555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:42.878 [2024-11-05 17:52:02.604563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:42.878 [2024-11-05 17:52:02.604572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.878 [2024-11-05 17:52:02.604673] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:42.878 [2024-11-05 17:52:02.604684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:42.878 [2024-11-05 17:52:02.604692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:42.878 [2024-11-05 17:52:02.604704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.878 [2024-11-05 17:52:02.604712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:42.878 [2024-11-05 17:52:02.604720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:42.878 [2024-11-05 17:52:02.604727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:42.878 [2024-11-05 17:52:02.604735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:42.878 [2024-11-05 17:52:02.604742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:42.878 [2024-11-05 17:52:02.604750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:42.878 [2024-11-05 17:52:02.604762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:42.878 [2024-11-05 17:52:02.604770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:42.878 [2024-11-05 17:52:02.604776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:42.878 [2024-11-05 17:52:02.604788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:42.878 [2024-11-05 17:52:02.604799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:42.878 [2024-11-05 17:52:02.604812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.878 [2024-11-05 17:52:02.604822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:42.878 [2024-11-05 17:52:02.604836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:42.878 [2024-11-05 17:52:02.604847] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:17:42.878 [2024-11-05 17:52:02.604862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:42.878 [2024-11-05 17:52:02.604873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:42.878 [2024-11-05 17:52:02.604886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:42.878 [2024-11-05 17:52:02.604897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:42.878 [2024-11-05 17:52:02.604910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:42.878 [2024-11-05 17:52:02.604920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:42.878 [2024-11-05 17:52:02.604933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:42.878 [2024-11-05 17:52:02.604943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:42.878 [2024-11-05 17:52:02.604957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:42.878 [2024-11-05 17:52:02.604966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:42.878 [2024-11-05 17:52:02.604978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:42.878 [2024-11-05 17:52:02.604987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:42.878 [2024-11-05 17:52:02.604998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:42.878 [2024-11-05 17:52:02.605008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:42.878 [2024-11-05 17:52:02.605026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:42.878 [2024-11-05 17:52:02.605041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:42.878 [2024-11-05 17:52:02.605061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:42.878 [2024-11-05 17:52:02.605091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:42.878 [2024-11-05 17:52:02.605116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:42.878 [2024-11-05 17:52:02.605129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:42.878 [2024-11-05 17:52:02.605147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.878 [2024-11-05 17:52:02.605157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:42.878 [2024-11-05 17:52:02.605175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:42.878 [2024-11-05 17:52:02.605185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.878 [2024-11-05 17:52:02.605197] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:42.878 [2024-11-05 17:52:02.605208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:42.878 [2024-11-05 17:52:02.605223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:42.878 [2024-11-05 17:52:02.605235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.878 [2024-11-05 17:52:02.605249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:42.878 [2024-11-05 17:52:02.605260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:42.878 [2024-11-05 17:52:02.605274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:42.878 [2024-11-05 17:52:02.605285] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:42.878 [2024-11-05 17:52:02.605300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:42.878 [2024-11-05 17:52:02.605312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:42.878 [2024-11-05 17:52:02.605329] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:42.878 [2024-11-05 17:52:02.605343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:42.878 [2024-11-05 17:52:02.605361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:42.879 [2024-11-05 17:52:02.605372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:42.879 [2024-11-05 17:52:02.605385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:42.879 [2024-11-05 17:52:02.605400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:42.879 [2024-11-05 17:52:02.605418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:42.879 [2024-11-05 17:52:02.605437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:42.879 [2024-11-05 17:52:02.605450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:42.879 [2024-11-05 17:52:02.605468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:42.879 [2024-11-05 17:52:02.605482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:42.879 [2024-11-05 17:52:02.605493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:42.879 [2024-11-05 17:52:02.605501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:42.879 [2024-11-05 17:52:02.605508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:42.879 [2024-11-05 17:52:02.605519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:42.879 [2024-11-05 17:52:02.605527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:42.879 [2024-11-05 17:52:02.605535] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:42.879 [2024-11-05 17:52:02.605544] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:42.879 [2024-11-05 17:52:02.605554] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 
blk_sz:0x20 00:17:42.879 [2024-11-05 17:52:02.605561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:42.879 [2024-11-05 17:52:02.605570] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:42.879 [2024-11-05 17:52:02.605578] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:42.879 [2024-11-05 17:52:02.605588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.879 [2024-11-05 17:52:02.605595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:42.879 [2024-11-05 17:52:02.605610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.981 ms 00:17:42.879 [2024-11-05 17:52:02.605617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.879 [2024-11-05 17:52:02.615228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.879 [2024-11-05 17:52:02.615263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:42.879 [2024-11-05 17:52:02.615275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.550 ms 00:17:42.879 [2024-11-05 17:52:02.615283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.879 [2024-11-05 17:52:02.615407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.879 [2024-11-05 17:52:02.615417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:42.879 [2024-11-05 17:52:02.615426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:17:42.879 [2024-11-05 17:52:02.615435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.879 [2024-11-05 17:52:02.624280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.879 [2024-11-05 17:52:02.624316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:42.879 [2024-11-05 17:52:02.624328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.822 ms 00:17:42.879 [2024-11-05 17:52:02.624338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.879 [2024-11-05 17:52:02.624385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.879 [2024-11-05 17:52:02.624394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:42.879 [2024-11-05 17:52:02.624404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:42.879 [2024-11-05 17:52:02.624411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.879 [2024-11-05 17:52:02.624750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.879 [2024-11-05 17:52:02.624773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:42.879 [2024-11-05 17:52:02.624784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:17:42.879 [2024-11-05 17:52:02.624793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.879 [2024-11-05 17:52:02.624930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.879 [2024-11-05 17:52:02.624939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:42.879 [2024-11-05 17:52:02.624949] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:17:42.879 [2024-11-05 17:52:02.624957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.879 [2024-11-05 17:52:02.630469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.879 [2024-11-05 17:52:02.630498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:42.879 [2024-11-05 17:52:02.630509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.488 ms 00:17:42.879 [2024-11-05 17:52:02.630516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.879 [2024-11-05 17:52:02.633323] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:42.879 [2024-11-05 17:52:02.633357] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:42.879 [2024-11-05 17:52:02.633370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.879 [2024-11-05 17:52:02.633379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:42.879 [2024-11-05 17:52:02.633389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.748 ms 00:17:42.879 [2024-11-05 17:52:02.633396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.879 [2024-11-05 17:52:02.648178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.879 [2024-11-05 17:52:02.648209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:42.879 [2024-11-05 17:52:02.648225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.737 ms 00:17:42.879 [2024-11-05 17:52:02.648233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.879 [2024-11-05 17:52:02.650342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.879 [2024-11-05 17:52:02.650374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:42.879 [2024-11-05 17:52:02.650384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.034 ms 00:17:42.879 [2024-11-05 17:52:02.650391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.879 [2024-11-05 17:52:02.652058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.879 [2024-11-05 17:52:02.652100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:42.879 [2024-11-05 17:52:02.652111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.628 ms 00:17:42.879 [2024-11-05 17:52:02.652119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.879 [2024-11-05 17:52:02.652439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.879 [2024-11-05 17:52:02.652451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:42.879 [2024-11-05 17:52:02.652465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:17:42.879 [2024-11-05 17:52:02.652474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.879 [2024-11-05 17:52:02.684277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.879 [2024-11-05 17:52:02.684333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:42.879 [2024-11-05 17:52:02.684352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.762 ms 
00:17:42.879 [2024-11-05 17:52:02.684365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.879 [2024-11-05 17:52:02.692008] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:42.879 [2024-11-05 17:52:02.706325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.879 [2024-11-05 17:52:02.706368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:42.879 [2024-11-05 17:52:02.706381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.862 ms 00:17:42.879 [2024-11-05 17:52:02.706392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.879 [2024-11-05 17:52:02.706469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.879 [2024-11-05 17:52:02.706483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:42.879 [2024-11-05 17:52:02.706492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:42.879 [2024-11-05 17:52:02.706502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.879 [2024-11-05 17:52:02.706552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.879 [2024-11-05 17:52:02.706568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:42.879 [2024-11-05 17:52:02.706576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:42.879 [2024-11-05 17:52:02.706585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.879 [2024-11-05 17:52:02.706611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.879 [2024-11-05 17:52:02.706626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:42.879 [2024-11-05 17:52:02.706636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:42.879 [2024-11-05 17:52:02.706645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.879 [2024-11-05 17:52:02.706676] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:42.879 [2024-11-05 17:52:02.706688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.879 [2024-11-05 17:52:02.706695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:42.879 [2024-11-05 17:52:02.706704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:42.879 [2024-11-05 17:52:02.706711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.879 [2024-11-05 17:52:02.710434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.879 [2024-11-05 17:52:02.710474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:42.879 [2024-11-05 17:52:02.710486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.697 ms 00:17:42.879 [2024-11-05 17:52:02.710496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.880 [2024-11-05 17:52:02.710594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.880 [2024-11-05 17:52:02.710605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:42.880 [2024-11-05 17:52:02.710615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:42.880 [2024-11-05 17:52:02.710622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.880 [2024-11-05 
17:52:02.711448] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:42.880 [2024-11-05 17:52:02.712441] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 121.303 ms, result 0 00:17:42.880 [2024-11-05 17:52:02.714240] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:42.880 Some configs were skipped because the RPC state that can call them passed over. 00:17:42.880 17:52:02 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:43.138 [2024-11-05 17:52:02.944944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.138 [2024-11-05 17:52:02.945009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:43.138 [2024-11-05 17:52:02.945022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.592 ms 00:17:43.138 [2024-11-05 17:52:02.945032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.138 [2024-11-05 17:52:02.945075] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.718 ms, result 0 00:17:43.138 true 00:17:43.138 17:52:02 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:43.401 [2024-11-05 17:52:03.154375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.401 [2024-11-05 17:52:03.154425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:43.401 [2024-11-05 17:52:03.154438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.765 ms 00:17:43.401 [2024-11-05 17:52:03.154445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.401 [2024-11-05 17:52:03.154482] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.876 ms, result 0 00:17:43.401 true 00:17:43.401 17:52:03 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 86522 00:17:43.401 17:52:03 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 86522 ']' 00:17:43.401 17:52:03 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 86522 00:17:43.401 17:52:03 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:17:43.401 17:52:03 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:43.401 17:52:03 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86522 00:17:43.401 17:52:03 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:43.401 17:52:03 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:43.401 killing process with pid 86522 00:17:43.401 17:52:03 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86522' 00:17:43.401 17:52:03 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 86522 00:17:43.401 17:52:03 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 86522 00:17:43.401 [2024-11-05 17:52:03.311485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.401 [2024-11-05 17:52:03.311541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:43.401 [2024-11-05 17:52:03.311555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:43.401 [2024-11-05 
17:52:03.311567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.401 [2024-11-05 17:52:03.311591] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:43.401 [2024-11-05 17:52:03.312169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.401 [2024-11-05 17:52:03.312200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:43.401 [2024-11-05 17:52:03.312215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.561 ms 00:17:43.401 [2024-11-05 17:52:03.312229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.401 [2024-11-05 17:52:03.312520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.401 [2024-11-05 17:52:03.312531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:43.401 [2024-11-05 17:52:03.312543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:17:43.401 [2024-11-05 17:52:03.312552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.401 [2024-11-05 17:52:03.316857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.401 [2024-11-05 17:52:03.316891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:43.401 [2024-11-05 17:52:03.316904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.282 ms 00:17:43.401 [2024-11-05 17:52:03.316914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.402 [2024-11-05 17:52:03.323989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.402 [2024-11-05 17:52:03.324033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:43.402 [2024-11-05 17:52:03.324048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.035 ms 00:17:43.402 [2024-11-05 17:52:03.324057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.402 [2024-11-05 17:52:03.327103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.402 [2024-11-05 17:52:03.327140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:43.402 [2024-11-05 17:52:03.327151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.964 ms 00:17:43.402 [2024-11-05 17:52:03.327158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.402 [2024-11-05 17:52:03.332080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.402 [2024-11-05 17:52:03.332120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:43.402 [2024-11-05 17:52:03.332132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.868 ms 00:17:43.402 [2024-11-05 17:52:03.332143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.402 [2024-11-05 17:52:03.332341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.402 [2024-11-05 17:52:03.332356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:43.402 [2024-11-05 17:52:03.332367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:17:43.402 [2024-11-05 17:52:03.332375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.402 [2024-11-05 17:52:03.335417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.402 [2024-11-05 17:52:03.335456] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:43.402 [2024-11-05 17:52:03.335471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.019 ms 00:17:43.402 [2024-11-05 17:52:03.335478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.402 [2024-11-05 17:52:03.338302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.402 [2024-11-05 17:52:03.338340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:43.402 [2024-11-05 17:52:03.338352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.675 ms 00:17:43.402 [2024-11-05 17:52:03.338359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.402 [2024-11-05 17:52:03.340316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.402 [2024-11-05 17:52:03.340352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:43.402 [2024-11-05 17:52:03.340364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.912 ms 00:17:43.402 [2024-11-05 17:52:03.340373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.402 [2024-11-05 17:52:03.342334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.402 [2024-11-05 17:52:03.342370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:43.402 [2024-11-05 17:52:03.342382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.882 ms 00:17:43.402 [2024-11-05 17:52:03.342390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.402 [2024-11-05 17:52:03.342428] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:43.402 [2024-11-05 17:52:03.342444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342575] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342845] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.342995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.343004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.343012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.343021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.343028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.343038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.343045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 17:52:03.343054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:43.402 [2024-11-05 
17:52:03.343077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:17:43.403 [2024-11-05 17:52:03.343299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:43.403 [2024-11-05 17:52:03.343418] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:43.403 [2024-11-05 17:52:03.343428] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 01e86416-23e3-4012-8803-d38bc6f06433 00:17:43.403 [2024-11-05 17:52:03.343438] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:43.403 [2024-11-05 17:52:03.343447] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:43.403 [2024-11-05 17:52:03.343455] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:43.403 [2024-11-05 17:52:03.343464] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:43.403 [2024-11-05 17:52:03.343471] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:43.403 [2024-11-05 17:52:03.343483] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:43.403 [2024-11-05 17:52:03.343490] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:43.403 [2024-11-05 17:52:03.343511] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:43.403 [2024-11-05 17:52:03.343518] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:43.403 [2024-11-05 17:52:03.343526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.403 [2024-11-05 17:52:03.343533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:43.403 [2024-11-05 17:52:03.343545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.100 ms 00:17:43.403 [2024-11-05 17:52:03.343553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:43.403 [2024-11-05 17:52:03.345345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.403 [2024-11-05 17:52:03.345374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:43.403 [2024-11-05 17:52:03.345387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.770 ms 00:17:43.403 [2024-11-05 17:52:03.345395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.403 [2024-11-05 17:52:03.345525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.403 [2024-11-05 17:52:03.345535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:43.403 [2024-11-05 17:52:03.345548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:17:43.403 [2024-11-05 17:52:03.345557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.403 [2024-11-05 17:52:03.352130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.403 [2024-11-05 17:52:03.352169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:43.403 [2024-11-05 17:52:03.352182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.403 [2024-11-05 17:52:03.352190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.403 [2024-11-05 17:52:03.352267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.403 [2024-11-05 17:52:03.352276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:43.403 [2024-11-05 17:52:03.352288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.403 [2024-11-05 17:52:03.352296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.403 [2024-11-05 17:52:03.352344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.403 [2024-11-05 17:52:03.352353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:43.403 [2024-11-05 17:52:03.352363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.403 [2024-11-05 17:52:03.352373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.403 [2024-11-05 17:52:03.352397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.403 [2024-11-05 17:52:03.352406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:43.403 [2024-11-05 17:52:03.352416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.403 [2024-11-05 17:52:03.352423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.403 [2024-11-05 17:52:03.363923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.403 [2024-11-05 17:52:03.363972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:43.403 [2024-11-05 17:52:03.363985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.403 [2024-11-05 17:52:03.363993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.403 [2024-11-05 17:52:03.372670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.403 [2024-11-05 17:52:03.372718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:43.403 [2024-11-05 17:52:03.372734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.403 
[2024-11-05 17:52:03.372742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.403 [2024-11-05 17:52:03.372796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.403 [2024-11-05 17:52:03.372806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:43.403 [2024-11-05 17:52:03.372822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.403 [2024-11-05 17:52:03.372829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.403 [2024-11-05 17:52:03.372865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.403 [2024-11-05 17:52:03.372874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:43.403 [2024-11-05 17:52:03.372884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.403 [2024-11-05 17:52:03.372892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.403 [2024-11-05 17:52:03.372964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.403 [2024-11-05 17:52:03.372976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:43.403 [2024-11-05 17:52:03.372987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.403 [2024-11-05 17:52:03.372994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.404 [2024-11-05 17:52:03.373028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.404 [2024-11-05 17:52:03.373038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:43.404 [2024-11-05 17:52:03.373049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.404 [2024-11-05 17:52:03.373057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.404 [2024-11-05 17:52:03.373121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.404 [2024-11-05 17:52:03.373130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:43.404 [2024-11-05 17:52:03.373143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.404 [2024-11-05 17:52:03.373150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.404 [2024-11-05 17:52:03.373199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.404 [2024-11-05 17:52:03.373210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:43.404 [2024-11-05 17:52:03.373224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.404 [2024-11-05 17:52:03.373232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.404 [2024-11-05 17:52:03.373374] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 61.859 ms, result 0 00:17:43.664 17:52:03 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:43.664 17:52:03 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:43.664 [2024-11-05 17:52:03.637599] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 
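[Editorial note] The `Process trim` sequences earlier in this run are driven through `scripts/rpc.py bdev_ftl_unmap`, and the `spdk_dd` invocation just above reads the trimmed ftl0 bdev back into a data file. The same unmap can also be issued without the CLI wrapper by speaking JSON-RPC 2.0 to the application's RPC socket directly. A minimal stdlib-only sketch: `/var/tmp/spdk.sock` is SPDK's default socket path (this run may configure another), and the parameter names are assumed to mirror the CLI flags shown in the log (`-b` -> `name`, `--lba`, `--num_blocks`):

```python
import json
import socket

def spdk_rpc(method, params, sock_path="/var/tmp/spdk.sock"):
    """Send one JSON-RPC 2.0 request to a running SPDK app; return its result."""
    request = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    reply = None
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(request).encode())
        buf = b""
        while True:  # read until the reply parses as complete JSON
            chunk = sock.recv(4096)
            if not chunk:
                break
            buf += chunk
            try:
                reply = json.loads(buf.decode())
                break
            except ValueError:
                continue
    if reply is None:
        raise RuntimeError("connection closed before a full reply arrived")
    if "error" in reply:
        raise RuntimeError(reply["error"])
    return reply["result"]

# Equivalent of: rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
print(spdk_rpc("bdev_ftl_unmap", {"name": "ftl0", "lba": 0, "num_blocks": 1024}))
```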
00:17:43.664 [2024-11-05 17:52:03.637738] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86558 ] 00:17:43.924 [2024-11-05 17:52:03.769313] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:43.924 [2024-11-05 17:52:03.801240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.924 [2024-11-05 17:52:03.824567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.186 [2024-11-05 17:52:03.924848] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:44.186 [2024-11-05 17:52:03.924920] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:44.186 [2024-11-05 17:52:04.084489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.186 [2024-11-05 17:52:04.084552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:44.186 [2024-11-05 17:52:04.084571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:44.186 [2024-11-05 17:52:04.084580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.186 [2024-11-05 17:52:04.087038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.186 [2024-11-05 17:52:04.087097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:44.186 [2024-11-05 17:52:04.087108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.436 ms 00:17:44.186 [2024-11-05 17:52:04.087116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.186 [2024-11-05 17:52:04.087209] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:44.186 [2024-11-05 17:52:04.087554] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:44.186 [2024-11-05 17:52:04.087601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.186 [2024-11-05 17:52:04.087611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:44.186 [2024-11-05 17:52:04.087621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:17:44.186 [2024-11-05 17:52:04.087630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.186 [2024-11-05 17:52:04.089147] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:44.186 [2024-11-05 17:52:04.092481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.186 [2024-11-05 17:52:04.092531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:44.186 [2024-11-05 17:52:04.092543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.337 ms 00:17:44.186 [2024-11-05 17:52:04.092551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.186 [2024-11-05 17:52:04.092628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.186 [2024-11-05 17:52:04.092640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:44.186 [2024-11-05 17:52:04.092649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:44.186 [2024-11-05 
17:52:04.092657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.186 [2024-11-05 17:52:04.098909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.186 [2024-11-05 17:52:04.098944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:44.186 [2024-11-05 17:52:04.098955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.208 ms 00:17:44.186 [2024-11-05 17:52:04.098972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.186 [2024-11-05 17:52:04.099107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.186 [2024-11-05 17:52:04.099120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:44.186 [2024-11-05 17:52:04.099129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:17:44.186 [2024-11-05 17:52:04.099142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.186 [2024-11-05 17:52:04.099183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.186 [2024-11-05 17:52:04.099193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:44.186 [2024-11-05 17:52:04.099201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:44.186 [2024-11-05 17:52:04.099209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.186 [2024-11-05 17:52:04.099229] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:44.186 [2024-11-05 17:52:04.100840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.186 [2024-11-05 17:52:04.100876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:44.186 [2024-11-05 17:52:04.100887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.616 ms 00:17:44.186 [2024-11-05 17:52:04.100902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.186 [2024-11-05 17:52:04.100941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.186 [2024-11-05 17:52:04.100954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:44.186 [2024-11-05 17:52:04.100965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:44.186 [2024-11-05 17:52:04.100974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.186 [2024-11-05 17:52:04.100993] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:44.186 [2024-11-05 17:52:04.101012] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:44.186 [2024-11-05 17:52:04.101048] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:44.186 [2024-11-05 17:52:04.101086] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:44.186 [2024-11-05 17:52:04.101192] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:44.186 [2024-11-05 17:52:04.101204] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:44.186 [2024-11-05 17:52:04.101215] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
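[Editorial note] The layout dump that follows reports each region twice: in MiB via the `dump_region` lines, and in raw FTL blocks via the superblock v5 `blk_offs`/`blk_sz` fields. The two agree under a 4096-byte block size, which also ties the 23592960 four-byte L2P entries ("L2P address size: 4") to the 90.00 MiB `l2p` region. A quick check of that arithmetic (the 4 KiB block size is inferred from the figures in this dump, not stated in it):

```python
FTL_BLOCK_SIZE = 4096  # bytes; inferred from the MiB figures in this dump

def blocks_to_mib(blk_sz_hex: str) -> float:
    """Convert a blk_sz field from the superblock dump to MiB."""
    return int(blk_sz_hex, 16) * FTL_BLOCK_SIZE / (1 << 20)

assert blocks_to_mib("0x5a00") == 90.0    # Region l2p: "blocks: 90.00 MiB"
assert blocks_to_mib("0x20") == 0.125     # Region sb: "blocks: 0.12 MiB"
# 23592960 L2P entries x 4-byte addresses = exactly the 90 MiB l2p region
assert 23592960 * 4 == 90 * (1 << 20)
```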
00:17:44.186 [2024-11-05 17:52:04.101225] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:44.186 [2024-11-05 17:52:04.101235] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:44.186 [2024-11-05 17:52:04.101243] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:44.187 [2024-11-05 17:52:04.101251] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:44.187 [2024-11-05 17:52:04.101260] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:44.187 [2024-11-05 17:52:04.101269] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:44.187 [2024-11-05 17:52:04.101278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.187 [2024-11-05 17:52:04.101286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:44.187 [2024-11-05 17:52:04.101294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:17:44.187 [2024-11-05 17:52:04.101301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.187 [2024-11-05 17:52:04.101390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.187 [2024-11-05 17:52:04.101399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:44.187 [2024-11-05 17:52:04.101407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:44.187 [2024-11-05 17:52:04.101414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.187 [2024-11-05 17:52:04.101512] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:44.187 [2024-11-05 17:52:04.101524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:44.187 [2024-11-05 17:52:04.101533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:44.187 [2024-11-05 17:52:04.101540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.187 [2024-11-05 17:52:04.101549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:44.187 [2024-11-05 17:52:04.101557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:44.187 [2024-11-05 17:52:04.101572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:44.187 [2024-11-05 17:52:04.101579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:44.187 [2024-11-05 17:52:04.101586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:44.187 [2024-11-05 17:52:04.101593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:44.187 [2024-11-05 17:52:04.101599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:44.187 [2024-11-05 17:52:04.101605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:44.187 [2024-11-05 17:52:04.101612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:44.187 [2024-11-05 17:52:04.101618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:44.187 [2024-11-05 17:52:04.101626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:44.187 [2024-11-05 17:52:04.101633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.187 [2024-11-05 17:52:04.101639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:17:44.187 [2024-11-05 17:52:04.101645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:44.187 [2024-11-05 17:52:04.101652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.187 [2024-11-05 17:52:04.101658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:44.187 [2024-11-05 17:52:04.101664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:44.187 [2024-11-05 17:52:04.101671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.187 [2024-11-05 17:52:04.101682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:44.187 [2024-11-05 17:52:04.101690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:44.187 [2024-11-05 17:52:04.101697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.187 [2024-11-05 17:52:04.101704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:44.187 [2024-11-05 17:52:04.101710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:44.187 [2024-11-05 17:52:04.101717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.187 [2024-11-05 17:52:04.101723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:44.187 [2024-11-05 17:52:04.101730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:44.187 [2024-11-05 17:52:04.101737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.187 [2024-11-05 17:52:04.101744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:44.187 [2024-11-05 17:52:04.101750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:44.187 [2024-11-05 17:52:04.101756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:44.187 [2024-11-05 17:52:04.101763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:44.187 [2024-11-05 17:52:04.101770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:44.187 [2024-11-05 17:52:04.101776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:44.187 [2024-11-05 17:52:04.101783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:44.187 [2024-11-05 17:52:04.101792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:44.187 [2024-11-05 17:52:04.101798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.187 [2024-11-05 17:52:04.101806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:44.187 [2024-11-05 17:52:04.101813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:44.187 [2024-11-05 17:52:04.101819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.187 [2024-11-05 17:52:04.101826] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:44.187 [2024-11-05 17:52:04.101833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:44.187 [2024-11-05 17:52:04.101840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:44.187 [2024-11-05 17:52:04.101848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.187 [2024-11-05 17:52:04.101860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:44.187 [2024-11-05 17:52:04.101866] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:44.187 [2024-11-05 17:52:04.101872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:44.187 [2024-11-05 17:52:04.101880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:44.187 [2024-11-05 17:52:04.101886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:44.187 [2024-11-05 17:52:04.101893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:44.187 [2024-11-05 17:52:04.101901] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:44.187 [2024-11-05 17:52:04.101912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:44.187 [2024-11-05 17:52:04.101922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:44.187 [2024-11-05 17:52:04.101929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:44.187 [2024-11-05 17:52:04.101936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:44.187 [2024-11-05 17:52:04.101953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:44.187 [2024-11-05 17:52:04.101960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:44.187 [2024-11-05 17:52:04.101967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:44.187 [2024-11-05 17:52:04.101975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:44.187 [2024-11-05 17:52:04.101982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:44.187 [2024-11-05 17:52:04.101989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:44.187 [2024-11-05 17:52:04.101997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:44.187 [2024-11-05 17:52:04.102004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:44.187 [2024-11-05 17:52:04.102011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:44.187 [2024-11-05 17:52:04.102018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:44.187 [2024-11-05 17:52:04.102026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:44.187 [2024-11-05 17:52:04.102033] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:44.187 [2024-11-05 17:52:04.102045] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:44.187 [2024-11-05 17:52:04.102055] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:44.187 [2024-11-05 17:52:04.102074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:44.187 [2024-11-05 17:52:04.102082] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:44.187 [2024-11-05 17:52:04.102089] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:44.187 [2024-11-05 17:52:04.102097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.187 [2024-11-05 17:52:04.102104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:44.187 [2024-11-05 17:52:04.102112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.653 ms 00:17:44.187 [2024-11-05 17:52:04.102122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.187 [2024-11-05 17:52:04.113312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.187 [2024-11-05 17:52:04.113354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:44.187 [2024-11-05 17:52:04.113367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.137 ms 00:17:44.187 [2024-11-05 17:52:04.113377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.187 [2024-11-05 17:52:04.113505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.187 [2024-11-05 17:52:04.113531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:44.187 [2024-11-05 17:52:04.113542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:44.187 [2024-11-05 17:52:04.113551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.187 [2024-11-05 17:52:04.131429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.188 [2024-11-05 17:52:04.131489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:44.188 [2024-11-05 17:52:04.131507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.852 ms 00:17:44.188 [2024-11-05 17:52:04.131524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.188 [2024-11-05 17:52:04.131636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.188 [2024-11-05 17:52:04.131653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:44.188 [2024-11-05 17:52:04.131665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:44.188 [2024-11-05 17:52:04.131676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.188 [2024-11-05 17:52:04.132148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.188 [2024-11-05 17:52:04.132187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:44.188 [2024-11-05 17:52:04.132201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:17:44.188 [2024-11-05 17:52:04.132212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.188 [2024-11-05 17:52:04.132397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:44.188 [2024-11-05 17:52:04.132414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:44.188 [2024-11-05 17:52:04.132426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:17:44.188 [2024-11-05 17:52:04.132436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.188 [2024-11-05 17:52:04.139828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.188 [2024-11-05 17:52:04.139878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:44.188 [2024-11-05 17:52:04.139892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.363 ms 00:17:44.188 [2024-11-05 17:52:04.139903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.188 [2024-11-05 17:52:04.143844] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:44.188 [2024-11-05 17:52:04.143887] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:44.188 [2024-11-05 17:52:04.143904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.188 [2024-11-05 17:52:04.143912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:44.188 [2024-11-05 17:52:04.143921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.875 ms 00:17:44.188 [2024-11-05 17:52:04.143929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.188 [2024-11-05 17:52:04.159323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.188 [2024-11-05 17:52:04.159366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:44.188 [2024-11-05 17:52:04.159378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.325 ms 00:17:44.188 [2024-11-05 17:52:04.159388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.188 [2024-11-05 17:52:04.161619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.188 [2024-11-05 17:52:04.161659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:44.188 [2024-11-05 17:52:04.161669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.147 ms 00:17:44.188 [2024-11-05 17:52:04.161676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.188 [2024-11-05 17:52:04.164158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.188 [2024-11-05 17:52:04.164204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:44.188 [2024-11-05 17:52:04.164215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.435 ms 00:17:44.188 [2024-11-05 17:52:04.164224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.188 [2024-11-05 17:52:04.164598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.188 [2024-11-05 17:52:04.164625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:44.188 [2024-11-05 17:52:04.164640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:17:44.188 [2024-11-05 17:52:04.164648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.449 [2024-11-05 17:52:04.187536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.449 [2024-11-05 17:52:04.187601] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:44.449 [2024-11-05 17:52:04.187615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.861 ms 00:17:44.449 [2024-11-05 17:52:04.187624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.449 [2024-11-05 17:52:04.196736] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:44.449 [2024-11-05 17:52:04.214421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.449 [2024-11-05 17:52:04.214480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:44.449 [2024-11-05 17:52:04.214503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.681 ms 00:17:44.449 [2024-11-05 17:52:04.214512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.449 [2024-11-05 17:52:04.214615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.449 [2024-11-05 17:52:04.214628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:44.449 [2024-11-05 17:52:04.214643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:44.449 [2024-11-05 17:52:04.214650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.449 [2024-11-05 17:52:04.214704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.449 [2024-11-05 17:52:04.214713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:44.449 [2024-11-05 17:52:04.214722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:44.450 [2024-11-05 17:52:04.214730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.450 [2024-11-05 17:52:04.214753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.450 [2024-11-05 17:52:04.214762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:44.450 [2024-11-05 17:52:04.214770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:44.450 [2024-11-05 17:52:04.214781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.450 [2024-11-05 17:52:04.214841] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:44.450 [2024-11-05 17:52:04.214880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.450 [2024-11-05 17:52:04.214900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:44.450 [2024-11-05 17:52:04.214913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:17:44.450 [2024-11-05 17:52:04.214927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.450 [2024-11-05 17:52:04.220368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.450 [2024-11-05 17:52:04.220418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:44.450 [2024-11-05 17:52:04.220432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.405 ms 00:17:44.450 [2024-11-05 17:52:04.220445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.450 [2024-11-05 17:52:04.220558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.450 [2024-11-05 17:52:04.220572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:44.450 [2024-11-05 17:52:04.220582] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:44.450 [2024-11-05 17:52:04.220590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.450 [2024-11-05 17:52:04.222275] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:44.450 [2024-11-05 17:52:04.223897] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 137.464 ms, result 0 00:17:44.450 [2024-11-05 17:52:04.225344] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:44.450 [2024-11-05 17:52:04.232579] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:45.393  [2024-11-05T17:52:06.350Z] Copying: 14/256 [MB] (14 MBps) [2024-11-05T17:52:07.292Z] Copying: 29/256 [MB] (15 MBps) [2024-11-05T17:52:08.239Z] Copying: 52/256 [MB] (22 MBps) [2024-11-05T17:52:09.627Z] Copying: 75/256 [MB] (22 MBps) [2024-11-05T17:52:10.583Z] Copying: 95/256 [MB] (19 MBps) [2024-11-05T17:52:11.528Z] Copying: 107/256 [MB] (12 MBps) [2024-11-05T17:52:12.471Z] Copying: 124/256 [MB] (17 MBps) [2024-11-05T17:52:13.413Z] Copying: 134/256 [MB] (10 MBps) [2024-11-05T17:52:14.356Z] Copying: 147960/262144 [kB] (10024 kBps) [2024-11-05T17:52:15.328Z] Copying: 158140/262144 [kB] (10180 kBps) [2024-11-05T17:52:16.271Z] Copying: 167984/262144 [kB] (9844 kBps) [2024-11-05T17:52:17.660Z] Copying: 174/256 [MB] (10 MBps) [2024-11-05T17:52:18.233Z] Copying: 188240/262144 [kB] (9720 kBps) [2024-11-05T17:52:19.653Z] Copying: 194/256 [MB] (10 MBps) [2024-11-05T17:52:20.597Z] Copying: 207068/262144 [kB] (8364 kBps) [2024-11-05T17:52:21.541Z] Copying: 216224/262144 [kB] (9156 kBps) [2024-11-05T17:52:22.484Z] Copying: 225440/262144 [kB] (9216 kBps) [2024-11-05T17:52:23.434Z] Copying: 234568/262144 [kB] (9128 kBps) [2024-11-05T17:52:24.375Z] Copying: 243760/262144 [kB] (9192 kBps) [2024-11-05T17:52:25.321Z] Copying: 253556/262144 [kB] (9796 kBps) [2024-11-05T17:52:25.321Z] Copying: 262072/262144 [kB] (8516 kBps) [2024-11-05T17:52:25.321Z] Copying: 256/256 [MB] (average 12 MBps)[2024-11-05 17:52:25.242199] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:05.330 [2024-11-05 17:52:25.244849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.330 [2024-11-05 17:52:25.244915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:05.330 [2024-11-05 17:52:25.244934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:05.330 [2024-11-05 17:52:25.244944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.330 [2024-11-05 17:52:25.244969] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:05.330 [2024-11-05 17:52:25.245961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.330 [2024-11-05 17:52:25.246009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:05.330 [2024-11-05 17:52:25.246032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.976 ms 00:18:05.330 [2024-11-05 17:52:25.246042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.330 [2024-11-05 17:52:25.246347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.330 [2024-11-05 17:52:25.246360] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:05.330 [2024-11-05 17:52:25.246380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:18:05.330 [2024-11-05 17:52:25.246389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.330 [2024-11-05 17:52:25.250157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.330 [2024-11-05 17:52:25.250184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:05.330 [2024-11-05 17:52:25.250196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.751 ms 00:18:05.330 [2024-11-05 17:52:25.250206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.330 [2024-11-05 17:52:25.257340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.330 [2024-11-05 17:52:25.257390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:05.330 [2024-11-05 17:52:25.257404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.114 ms 00:18:05.330 [2024-11-05 17:52:25.257422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.330 [2024-11-05 17:52:25.261298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.330 [2024-11-05 17:52:25.261357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:05.330 [2024-11-05 17:52:25.261384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.795 ms 00:18:05.330 [2024-11-05 17:52:25.261394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.330 [2024-11-05 17:52:25.267545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.330 [2024-11-05 17:52:25.267602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:05.330 [2024-11-05 17:52:25.267616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.097 ms 00:18:05.330 [2024-11-05 17:52:25.267625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.330 [2024-11-05 17:52:25.267793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.330 [2024-11-05 17:52:25.267809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:05.330 [2024-11-05 17:52:25.267830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:18:05.330 [2024-11-05 17:52:25.267843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.330 [2024-11-05 17:52:25.271714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.330 [2024-11-05 17:52:25.271769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:05.330 [2024-11-05 17:52:25.271781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.850 ms 00:18:05.330 [2024-11-05 17:52:25.271790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.330 [2024-11-05 17:52:25.275053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.330 [2024-11-05 17:52:25.275115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:05.330 [2024-11-05 17:52:25.275126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.214 ms 00:18:05.330 [2024-11-05 17:52:25.275135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.330 [2024-11-05 17:52:25.277687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:05.330 [2024-11-05 17:52:25.277736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:05.330 [2024-11-05 17:52:25.277748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.505 ms 00:18:05.330 [2024-11-05 17:52:25.277757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.330 [2024-11-05 17:52:25.280459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.330 [2024-11-05 17:52:25.280540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:05.330 [2024-11-05 17:52:25.280552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.595 ms 00:18:05.330 [2024-11-05 17:52:25.280559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.331 [2024-11-05 17:52:25.280609] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:05.331 [2024-11-05 17:52:25.280632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280783] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.280984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 
17:52:25.280992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 
00:18:05.331 [2024-11-05 17:52:25.281222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:05.331 [2024-11-05 17:52:25.281347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:05.332 [2024-11-05 17:52:25.281355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:05.332 [2024-11-05 17:52:25.281363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:05.332 [2024-11-05 17:52:25.281370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:05.332 [2024-11-05 17:52:25.281378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:05.332 [2024-11-05 17:52:25.281388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:05.332 [2024-11-05 17:52:25.281397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:05.332 [2024-11-05 17:52:25.281405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:05.332 [2024-11-05 17:52:25.281413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:05.332 [2024-11-05 17:52:25.281420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 
wr_cnt: 0 state: free 00:18:05.332 [2024-11-05 17:52:25.281437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:05.332 [2024-11-05 17:52:25.281448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:05.332 [2024-11-05 17:52:25.281457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:05.332 [2024-11-05 17:52:25.281465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:05.332 [2024-11-05 17:52:25.281473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:05.332 [2024-11-05 17:52:25.281483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:05.332 [2024-11-05 17:52:25.281494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:05.332 [2024-11-05 17:52:25.281512] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:05.332 [2024-11-05 17:52:25.281541] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 01e86416-23e3-4012-8803-d38bc6f06433 00:18:05.332 [2024-11-05 17:52:25.281552] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:05.332 [2024-11-05 17:52:25.281564] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:05.332 [2024-11-05 17:52:25.281573] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:05.332 [2024-11-05 17:52:25.281584] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:05.332 [2024-11-05 17:52:25.281593] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:05.332 [2024-11-05 17:52:25.281603] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:05.332 [2024-11-05 17:52:25.281615] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:05.332 [2024-11-05 17:52:25.281622] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:05.332 [2024-11-05 17:52:25.281628] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:05.332 [2024-11-05 17:52:25.281637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.332 [2024-11-05 17:52:25.281646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:05.332 [2024-11-05 17:52:25.281656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.030 ms 00:18:05.332 [2024-11-05 17:52:25.281663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.332 [2024-11-05 17:52:25.284931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.332 [2024-11-05 17:52:25.284972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:05.332 [2024-11-05 17:52:25.284993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.249 ms 00:18:05.332 [2024-11-05 17:52:25.285004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.332 [2024-11-05 17:52:25.285188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.332 [2024-11-05 17:52:25.285200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:05.332 [2024-11-05 17:52:25.285211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:18:05.332 
[2024-11-05 17:52:25.285219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.332 [2024-11-05 17:52:25.296610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.332 [2024-11-05 17:52:25.296687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:05.332 [2024-11-05 17:52:25.296700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.332 [2024-11-05 17:52:25.296716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.332 [2024-11-05 17:52:25.296841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.332 [2024-11-05 17:52:25.296854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:05.332 [2024-11-05 17:52:25.296868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.332 [2024-11-05 17:52:25.296878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.332 [2024-11-05 17:52:25.296938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.332 [2024-11-05 17:52:25.296949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:05.332 [2024-11-05 17:52:25.296959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.332 [2024-11-05 17:52:25.296967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.332 [2024-11-05 17:52:25.296993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.332 [2024-11-05 17:52:25.297003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:05.332 [2024-11-05 17:52:25.297016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.332 [2024-11-05 17:52:25.297025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.332 [2024-11-05 17:52:25.318469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.332 [2024-11-05 17:52:25.318575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:05.332 [2024-11-05 17:52:25.318592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.332 [2024-11-05 17:52:25.318615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.594 [2024-11-05 17:52:25.334256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.594 [2024-11-05 17:52:25.334351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:05.594 [2024-11-05 17:52:25.334366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.594 [2024-11-05 17:52:25.334377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.594 [2024-11-05 17:52:25.334482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.594 [2024-11-05 17:52:25.334494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:05.594 [2024-11-05 17:52:25.334504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.594 [2024-11-05 17:52:25.334515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.594 [2024-11-05 17:52:25.334551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.594 [2024-11-05 17:52:25.334570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:05.594 [2024-11-05 17:52:25.334581] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.594 [2024-11-05 17:52:25.334589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.594 [2024-11-05 17:52:25.334684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.594 [2024-11-05 17:52:25.334696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:05.594 [2024-11-05 17:52:25.334705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.594 [2024-11-05 17:52:25.334715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.594 [2024-11-05 17:52:25.334751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.594 [2024-11-05 17:52:25.334762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:05.594 [2024-11-05 17:52:25.334778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.594 [2024-11-05 17:52:25.334787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.594 [2024-11-05 17:52:25.334856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.594 [2024-11-05 17:52:25.334866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:05.594 [2024-11-05 17:52:25.334875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.594 [2024-11-05 17:52:25.334884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.594 [2024-11-05 17:52:25.334945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.595 [2024-11-05 17:52:25.334961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:05.595 [2024-11-05 17:52:25.334971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.595 [2024-11-05 17:52:25.334980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.595 [2024-11-05 17:52:25.335216] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 90.356 ms, result 0 00:18:05.856 00:18:05.856 00:18:05.856 17:52:25 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:18:05.856 17:52:25 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:06.429 17:52:26 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:06.429 [2024-11-05 17:52:26.314137] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:18:06.429 [2024-11-05 17:52:26.314370] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86798 ] 00:18:06.690 [2024-11-05 17:52:26.451295] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:18:06.690 [2024-11-05 17:52:26.482487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.690 [2024-11-05 17:52:26.524247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.690 [2024-11-05 17:52:26.673776] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:06.690 [2024-11-05 17:52:26.673897] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:06.952 [2024-11-05 17:52:26.838783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.952 [2024-11-05 17:52:26.838920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:06.952 [2024-11-05 17:52:26.838940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:06.952 [2024-11-05 17:52:26.838950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.952 [2024-11-05 17:52:26.841748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.952 [2024-11-05 17:52:26.841802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:06.952 [2024-11-05 17:52:26.841815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.771 ms 00:18:06.952 [2024-11-05 17:52:26.841824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.952 [2024-11-05 17:52:26.841928] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:06.952 [2024-11-05 17:52:26.842360] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:06.952 [2024-11-05 17:52:26.842413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.952 [2024-11-05 17:52:26.842423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:06.952 [2024-11-05 17:52:26.842433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.494 ms 00:18:06.952 [2024-11-05 17:52:26.842442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.952 [2024-11-05 17:52:26.844898] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:06.952 [2024-11-05 17:52:26.849888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.952 [2024-11-05 17:52:26.849941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:06.952 [2024-11-05 17:52:26.849953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.987 ms 00:18:06.952 [2024-11-05 17:52:26.849962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.952 [2024-11-05 17:52:26.850056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.952 [2024-11-05 17:52:26.850093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:06.952 [2024-11-05 17:52:26.850105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:06.952 [2024-11-05 17:52:26.850113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.952 [2024-11-05 17:52:26.861571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.952 [2024-11-05 17:52:26.861622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:06.952 [2024-11-05 17:52:26.861642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.403 ms 00:18:06.952 [2024-11-05 17:52:26.861652] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.952 [2024-11-05 17:52:26.861823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.952 [2024-11-05 17:52:26.861836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:06.952 [2024-11-05 17:52:26.861847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:18:06.952 [2024-11-05 17:52:26.861857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.952 [2024-11-05 17:52:26.861895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.952 [2024-11-05 17:52:26.861905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:06.952 [2024-11-05 17:52:26.861915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:06.952 [2024-11-05 17:52:26.861923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.952 [2024-11-05 17:52:26.861951] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:06.952 [2024-11-05 17:52:26.864844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.952 [2024-11-05 17:52:26.864911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:06.952 [2024-11-05 17:52:26.864928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.901 ms 00:18:06.952 [2024-11-05 17:52:26.864948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.952 [2024-11-05 17:52:26.865007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.952 [2024-11-05 17:52:26.865021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:06.952 [2024-11-05 17:52:26.865031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:18:06.952 [2024-11-05 17:52:26.865041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.952 [2024-11-05 17:52:26.865081] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:06.952 [2024-11-05 17:52:26.865109] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:06.952 [2024-11-05 17:52:26.865152] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:06.952 [2024-11-05 17:52:26.865176] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:06.952 [2024-11-05 17:52:26.865291] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:06.952 [2024-11-05 17:52:26.865303] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:06.952 [2024-11-05 17:52:26.865315] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:06.952 [2024-11-05 17:52:26.865327] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:06.952 [2024-11-05 17:52:26.865339] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:06.952 [2024-11-05 17:52:26.865352] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:06.952 [2024-11-05 17:52:26.865361] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] 
L2P address size: 4 00:18:06.952 [2024-11-05 17:52:26.865372] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:06.952 [2024-11-05 17:52:26.865381] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:06.952 [2024-11-05 17:52:26.865392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.952 [2024-11-05 17:52:26.865400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:06.952 [2024-11-05 17:52:26.865408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:18:06.952 [2024-11-05 17:52:26.865416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.952 [2024-11-05 17:52:26.865508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.952 [2024-11-05 17:52:26.865519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:06.952 [2024-11-05 17:52:26.865533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:06.952 [2024-11-05 17:52:26.865541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.952 [2024-11-05 17:52:26.865647] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:06.952 [2024-11-05 17:52:26.865672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:06.952 [2024-11-05 17:52:26.865683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:06.952 [2024-11-05 17:52:26.865693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.952 [2024-11-05 17:52:26.865703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:06.952 [2024-11-05 17:52:26.865715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:06.952 [2024-11-05 17:52:26.865735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:06.952 [2024-11-05 17:52:26.865744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:06.952 [2024-11-05 17:52:26.865752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:06.952 [2024-11-05 17:52:26.865760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:06.952 [2024-11-05 17:52:26.865770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:06.952 [2024-11-05 17:52:26.865778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:06.952 [2024-11-05 17:52:26.865786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:06.952 [2024-11-05 17:52:26.865795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:06.952 [2024-11-05 17:52:26.865804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:06.952 [2024-11-05 17:52:26.865814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.952 [2024-11-05 17:52:26.865826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:06.952 [2024-11-05 17:52:26.865840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:06.952 [2024-11-05 17:52:26.865853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.952 [2024-11-05 17:52:26.865866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:06.952 [2024-11-05 17:52:26.865876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:06.952 [2024-11-05 17:52:26.865893] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:06.953 [2024-11-05 17:52:26.865907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:06.953 [2024-11-05 17:52:26.865919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:06.953 [2024-11-05 17:52:26.865927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:06.953 [2024-11-05 17:52:26.865936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:06.953 [2024-11-05 17:52:26.865945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:06.953 [2024-11-05 17:52:26.865955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:06.953 [2024-11-05 17:52:26.865965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:06.953 [2024-11-05 17:52:26.865973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:06.953 [2024-11-05 17:52:26.865981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:06.953 [2024-11-05 17:52:26.865989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:06.953 [2024-11-05 17:52:26.865997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:06.953 [2024-11-05 17:52:26.866004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:06.953 [2024-11-05 17:52:26.866011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:06.953 [2024-11-05 17:52:26.866018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:06.953 [2024-11-05 17:52:26.866025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:06.953 [2024-11-05 17:52:26.866035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:06.953 [2024-11-05 17:52:26.866042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:06.953 [2024-11-05 17:52:26.866049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.953 [2024-11-05 17:52:26.866056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:06.953 [2024-11-05 17:52:26.866078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:06.953 [2024-11-05 17:52:26.866086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.953 [2024-11-05 17:52:26.866093] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:06.953 [2024-11-05 17:52:26.866101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:06.953 [2024-11-05 17:52:26.866109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:06.953 [2024-11-05 17:52:26.866117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.953 [2024-11-05 17:52:26.866124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:06.953 [2024-11-05 17:52:26.866131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:06.953 [2024-11-05 17:52:26.866137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:06.953 [2024-11-05 17:52:26.866145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:06.953 [2024-11-05 17:52:26.866151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:06.953 [2024-11-05 17:52:26.866158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:18:06.953 [2024-11-05 17:52:26.866169] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:06.953 [2024-11-05 17:52:26.866184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:06.953 [2024-11-05 17:52:26.866195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:06.953 [2024-11-05 17:52:26.866204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:06.953 [2024-11-05 17:52:26.866213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:06.953 [2024-11-05 17:52:26.866222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:06.953 [2024-11-05 17:52:26.866231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:06.953 [2024-11-05 17:52:26.866238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:06.953 [2024-11-05 17:52:26.866247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:06.953 [2024-11-05 17:52:26.866254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:06.953 [2024-11-05 17:52:26.866263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:06.953 [2024-11-05 17:52:26.866271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:06.953 [2024-11-05 17:52:26.866279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:06.953 [2024-11-05 17:52:26.866287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:06.953 [2024-11-05 17:52:26.866295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:06.953 [2024-11-05 17:52:26.866303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:06.953 [2024-11-05 17:52:26.866316] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:06.953 [2024-11-05 17:52:26.866328] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:06.953 [2024-11-05 17:52:26.866339] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:06.953 [2024-11-05 17:52:26.866347] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:06.953 [2024-11-05 17:52:26.866355] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:06.953 [2024-11-05 17:52:26.866363] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:06.953 [2024-11-05 17:52:26.866373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.953 [2024-11-05 17:52:26.866381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:06.953 [2024-11-05 17:52:26.866390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.795 ms 00:18:06.953 [2024-11-05 17:52:26.866403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.953 [2024-11-05 17:52:26.886790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.953 [2024-11-05 17:52:26.886858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:06.953 [2024-11-05 17:52:26.886875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.306 ms 00:18:06.953 [2024-11-05 17:52:26.886893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.953 [2024-11-05 17:52:26.887092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.953 [2024-11-05 17:52:26.887107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:06.953 [2024-11-05 17:52:26.887120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:18:06.953 [2024-11-05 17:52:26.887130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.953 [2024-11-05 17:52:26.915768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.953 [2024-11-05 17:52:26.915870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:06.953 [2024-11-05 17:52:26.915914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.603 ms 00:18:06.953 [2024-11-05 17:52:26.915949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.953 [2024-11-05 17:52:26.916147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.953 [2024-11-05 17:52:26.916172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:06.953 [2024-11-05 17:52:26.916205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:06.953 [2024-11-05 17:52:26.916225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.953 [2024-11-05 17:52:26.916990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.953 [2024-11-05 17:52:26.917044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:06.953 [2024-11-05 17:52:26.917095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.725 ms 00:18:06.953 [2024-11-05 17:52:26.917113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.953 [2024-11-05 17:52:26.917361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.953 [2024-11-05 17:52:26.917380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:06.953 [2024-11-05 17:52:26.917403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:18:06.953 [2024-11-05 17:52:26.917417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.953 [2024-11-05 17:52:26.929263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.953 [2024-11-05 
17:52:26.929324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:06.953 [2024-11-05 17:52:26.929337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.802 ms 00:18:06.953 [2024-11-05 17:52:26.929346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.953 [2024-11-05 17:52:26.934186] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:06.953 [2024-11-05 17:52:26.934240] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:06.953 [2024-11-05 17:52:26.934256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.953 [2024-11-05 17:52:26.934267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:06.953 [2024-11-05 17:52:26.934277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.746 ms 00:18:06.953 [2024-11-05 17:52:26.934286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.225 [2024-11-05 17:52:26.950532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.225 [2024-11-05 17:52:26.950623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:07.225 [2024-11-05 17:52:26.950638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.156 ms 00:18:07.225 [2024-11-05 17:52:26.950647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.225 [2024-11-05 17:52:26.953971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.225 [2024-11-05 17:52:26.954019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:07.225 [2024-11-05 17:52:26.954031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.212 ms 00:18:07.225 [2024-11-05 17:52:26.954040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.225 [2024-11-05 17:52:26.956786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.225 [2024-11-05 17:52:26.956832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:07.225 [2024-11-05 17:52:26.956843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.669 ms 00:18:07.225 [2024-11-05 17:52:26.956851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.225 [2024-11-05 17:52:26.957304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.225 [2024-11-05 17:52:26.957412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:07.225 [2024-11-05 17:52:26.957424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:18:07.225 [2024-11-05 17:52:26.957433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.225 [2024-11-05 17:52:26.988194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.225 [2024-11-05 17:52:26.988295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:07.225 [2024-11-05 17:52:26.988314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.728 ms 00:18:07.225 [2024-11-05 17:52:26.988332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.225 [2024-11-05 17:52:26.997628] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:07.225 [2024-11-05 17:52:27.023496] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.225 [2024-11-05 17:52:27.023580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:07.225 [2024-11-05 17:52:27.023599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.034 ms 00:18:07.225 [2024-11-05 17:52:27.023620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.225 [2024-11-05 17:52:27.023785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.225 [2024-11-05 17:52:27.023805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:07.225 [2024-11-05 17:52:27.023821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:07.225 [2024-11-05 17:52:27.023831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.225 [2024-11-05 17:52:27.023909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.225 [2024-11-05 17:52:27.023929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:07.225 [2024-11-05 17:52:27.023940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:07.225 [2024-11-05 17:52:27.023952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.225 [2024-11-05 17:52:27.023986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.225 [2024-11-05 17:52:27.023997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:07.225 [2024-11-05 17:52:27.024010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:07.225 [2024-11-05 17:52:27.024019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.225 [2024-11-05 17:52:27.024098] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:07.225 [2024-11-05 17:52:27.024117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.225 [2024-11-05 17:52:27.024127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:07.225 [2024-11-05 17:52:27.024136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:18:07.226 [2024-11-05 17:52:27.024148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.226 [2024-11-05 17:52:27.031443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.226 [2024-11-05 17:52:27.031509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:07.226 [2024-11-05 17:52:27.031523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.264 ms 00:18:07.226 [2024-11-05 17:52:27.031541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.226 [2024-11-05 17:52:27.031649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.226 [2024-11-05 17:52:27.031662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:07.226 [2024-11-05 17:52:27.031672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:18:07.226 [2024-11-05 17:52:27.031681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.226 [2024-11-05 17:52:27.033494] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:07.226 [2024-11-05 17:52:27.034949] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 194.296 
ms, result 0 00:18:07.226 [2024-11-05 17:52:27.036261] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:07.226 [2024-11-05 17:52:27.043670] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:07.800  [2024-11-05T17:52:27.791Z] Copying: 4096/4096 [kB] (average 9122 kBps)[2024-11-05 17:52:27.493929] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:07.800 [2024-11-05 17:52:27.496003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.800 [2024-11-05 17:52:27.496082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:07.800 [2024-11-05 17:52:27.496099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:07.800 [2024-11-05 17:52:27.496108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.800 [2024-11-05 17:52:27.496134] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:07.800 [2024-11-05 17:52:27.496830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.800 [2024-11-05 17:52:27.496874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:07.800 [2024-11-05 17:52:27.496888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms 00:18:07.800 [2024-11-05 17:52:27.496898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.800 [2024-11-05 17:52:27.500781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.800 [2024-11-05 17:52:27.500838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:07.800 [2024-11-05 17:52:27.500851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.854 ms 00:18:07.800 [2024-11-05 17:52:27.500860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.800 [2024-11-05 17:52:27.505397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.800 [2024-11-05 17:52:27.505433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:07.800 [2024-11-05 17:52:27.505445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.518 ms 00:18:07.800 [2024-11-05 17:52:27.505454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.800 [2024-11-05 17:52:27.512500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.800 [2024-11-05 17:52:27.512543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:07.800 [2024-11-05 17:52:27.512562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.015 ms 00:18:07.800 [2024-11-05 17:52:27.512571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.800 [2024-11-05 17:52:27.516137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.800 [2024-11-05 17:52:27.516185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:07.800 [2024-11-05 17:52:27.516209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.488 ms 00:18:07.800 [2024-11-05 17:52:27.516217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.800 [2024-11-05 17:52:27.521638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.800 [2024-11-05 17:52:27.521690] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:07.800 [2024-11-05 17:52:27.521703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.373 ms 00:18:07.800 [2024-11-05 17:52:27.521713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.800 [2024-11-05 17:52:27.521847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.800 [2024-11-05 17:52:27.521860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:07.800 [2024-11-05 17:52:27.521878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:18:07.800 [2024-11-05 17:52:27.521888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.800 [2024-11-05 17:52:27.525679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.800 [2024-11-05 17:52:27.525729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:07.800 [2024-11-05 17:52:27.525740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.771 ms 00:18:07.800 [2024-11-05 17:52:27.525750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.800 [2024-11-05 17:52:27.529079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.800 [2024-11-05 17:52:27.529124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:07.800 [2024-11-05 17:52:27.529134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.268 ms 00:18:07.800 [2024-11-05 17:52:27.529142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.800 [2024-11-05 17:52:27.531490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.800 [2024-11-05 17:52:27.531537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:07.800 [2024-11-05 17:52:27.531547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.303 ms 00:18:07.800 [2024-11-05 17:52:27.531555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.800 [2024-11-05 17:52:27.534232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.800 [2024-11-05 17:52:27.534282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:07.800 [2024-11-05 17:52:27.534293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.600 ms 00:18:07.800 [2024-11-05 17:52:27.534302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.800 [2024-11-05 17:52:27.534346] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:07.800 [2024-11-05 17:52:27.534365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:07.800 [2024-11-05 17:52:27.534378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:07.800 [2024-11-05 17:52:27.534388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:07.800 [2024-11-05 17:52:27.534398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:07.800 [2024-11-05 17:52:27.534408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:07.800 [2024-11-05 17:52:27.534418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:07.800 [2024-11-05 
17:52:27.534428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:07.800 [2024-11-05 17:52:27.534438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:07.800 [2024-11-05 17:52:27.534450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:07.800 [2024-11-05 17:52:27.534462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:07.800 [2024-11-05 17:52:27.534471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:07.800 [2024-11-05 17:52:27.534480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:07.800 [2024-11-05 17:52:27.534489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:07.800 [2024-11-05 17:52:27.534498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:07.800 [2024-11-05 17:52:27.534507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:07.800 [2024-11-05 17:52:27.534516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:07.800 [2024-11-05 17:52:27.534525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:07.800 [2024-11-05 17:52:27.534534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:07.800 [2024-11-05 17:52:27.534542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:07.800 [2024-11-05 17:52:27.534551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:07.800 [2024-11-05 17:52:27.534560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:07.800 [2024-11-05 17:52:27.534569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:07.800 [2024-11-05 17:52:27.534580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:07.800 [2024-11-05 17:52:27.534589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:07.800 [2024-11-05 17:52:27.534598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:07.800 [2024-11-05 17:52:27.534607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:07.800 [2024-11-05 17:52:27.534616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:07.800 [2024-11-05 17:52:27.534626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:07.800 [2024-11-05 17:52:27.534636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.534644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.534653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 
00:18:07.801 [2024-11-05 17:52:27.534664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.534676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.534686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.534695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.534705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.534713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.534724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.534734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.534743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.534752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.534761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.534770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.534781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.534790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.534799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.534807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.534829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.534838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.534849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.534859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.534868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.534876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.534885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.534896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.534905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 
wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.534914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.534923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.534933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.534944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.534953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.534963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.534972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.534985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.534995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:07.801 [2024-11-05 17:52:27.535373] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:07.801 [2024-11-05 17:52:27.535398] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 01e86416-23e3-4012-8803-d38bc6f06433 00:18:07.801 [2024-11-05 17:52:27.535408] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:07.801 [2024-11-05 17:52:27.535418] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:07.801 [2024-11-05 17:52:27.535429] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:07.801 [2024-11-05 17:52:27.535440] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:07.801 [2024-11-05 17:52:27.535449] ftl_debug.c: 
218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:07.801 [2024-11-05 17:52:27.535462] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:07.801 [2024-11-05 17:52:27.535473] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:07.801 [2024-11-05 17:52:27.535481] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:07.801 [2024-11-05 17:52:27.535489] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:07.801 [2024-11-05 17:52:27.535497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.801 [2024-11-05 17:52:27.535508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:07.801 [2024-11-05 17:52:27.535518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.153 ms 00:18:07.801 [2024-11-05 17:52:27.535528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.801 [2024-11-05 17:52:27.538013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.801 [2024-11-05 17:52:27.538056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:07.801 [2024-11-05 17:52:27.538084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.464 ms 00:18:07.801 [2024-11-05 17:52:27.538095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.801 [2024-11-05 17:52:27.538222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.801 [2024-11-05 17:52:27.538233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:07.801 [2024-11-05 17:52:27.538243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:18:07.801 [2024-11-05 17:52:27.538251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.801 [2024-11-05 17:52:27.546837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.802 [2024-11-05 17:52:27.546889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:07.802 [2024-11-05 17:52:27.546907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.802 [2024-11-05 17:52:27.546915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.802 [2024-11-05 17:52:27.547005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.802 [2024-11-05 17:52:27.547014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:07.802 [2024-11-05 17:52:27.547023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.802 [2024-11-05 17:52:27.547032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.802 [2024-11-05 17:52:27.547100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.802 [2024-11-05 17:52:27.547112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:07.802 [2024-11-05 17:52:27.547122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.802 [2024-11-05 17:52:27.547135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.802 [2024-11-05 17:52:27.547155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.802 [2024-11-05 17:52:27.547163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:07.802 [2024-11-05 17:52:27.547172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:18:07.802 [2024-11-05 17:52:27.547180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.802 [2024-11-05 17:52:27.563587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.802 [2024-11-05 17:52:27.563659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:07.802 [2024-11-05 17:52:27.563682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.802 [2024-11-05 17:52:27.563691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.802 [2024-11-05 17:52:27.575748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.802 [2024-11-05 17:52:27.575821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:07.802 [2024-11-05 17:52:27.575835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.802 [2024-11-05 17:52:27.575844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.802 [2024-11-05 17:52:27.575913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.802 [2024-11-05 17:52:27.575923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:07.802 [2024-11-05 17:52:27.575932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.802 [2024-11-05 17:52:27.575941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.802 [2024-11-05 17:52:27.575984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.802 [2024-11-05 17:52:27.575994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:07.802 [2024-11-05 17:52:27.576005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.802 [2024-11-05 17:52:27.576020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.802 [2024-11-05 17:52:27.576123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.802 [2024-11-05 17:52:27.576137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:07.802 [2024-11-05 17:52:27.576146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.802 [2024-11-05 17:52:27.576155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.802 [2024-11-05 17:52:27.576191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.802 [2024-11-05 17:52:27.576205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:07.802 [2024-11-05 17:52:27.576221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.802 [2024-11-05 17:52:27.576235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.802 [2024-11-05 17:52:27.576280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.802 [2024-11-05 17:52:27.576292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:07.802 [2024-11-05 17:52:27.576301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.802 [2024-11-05 17:52:27.576309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.802 [2024-11-05 17:52:27.576364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.802 [2024-11-05 17:52:27.576378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:07.802 [2024-11-05 17:52:27.576389] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.802 [2024-11-05 17:52:27.576398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.802 [2024-11-05 17:52:27.576561] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 80.558 ms, result 0 00:18:08.063 00:18:08.063 00:18:08.063 17:52:27 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=86812 00:18:08.063 17:52:27 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 86812 00:18:08.063 17:52:27 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:18:08.063 17:52:27 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 86812 ']' 00:18:08.063 17:52:27 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.063 17:52:27 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:08.063 17:52:27 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.063 17:52:27 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:08.063 17:52:27 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:08.063 [2024-11-05 17:52:27.909296] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:18:08.063 [2024-11-05 17:52:27.909456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86812 ] 00:18:08.063 [2024-11-05 17:52:28.042976] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
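
The exchange above is the trim test's standard service-start pattern: ftl/trim.sh line 92 launches spdk_tgt with the ftl_init debug log component enabled, line 93 stores its pid in svcpid, and waitforlisten polls until the target answers on the UNIX domain socket /var/tmp/spdk.sock before any RPC is sent. A minimal stand-alone sketch of that pattern, assuming the repository paths shown in this log and a hypothetical saved config at /tmp/ftl_config.json (the real logic lives in ftl/trim.sh and common/autotest_common.sh):

#!/usr/bin/env bash
# Sketch of the start-and-wait pattern logged above; binary and socket paths
# are taken from this log, the config filename and retry budget are assumptions.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk.sock

"$SPDK_BIN" -L ftl_init &            # -L enables the ftl_init debug log component
svcpid=$!

# waitforlisten equivalent: poll the RPC socket until the reactor is up
for _ in $(seq 1 100); do
    "$RPC" -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done

"$RPC" -s "$SOCK" load_config < /tmp/ftl_config.json    # hypothetical file; the test pipes its own config
"$RPC" -s "$SOCK" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024    # same unmap call the test issues below

kill "$svcpid"
wait "$svcpid" 2>/dev/null

Polling the socket rather than sleeping for a fixed interval is why the harness can print "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." and then proceed the moment the reactor reports it is running on core 0.
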
00:18:08.325 [2024-11-05 17:52:28.074209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.325 [2024-11-05 17:52:28.109355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.899 17:52:28 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:08.899 17:52:28 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:18:08.900 17:52:28 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:09.161 [2024-11-05 17:52:29.040168] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:09.161 [2024-11-05 17:52:29.040272] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:09.425 [2024-11-05 17:52:29.221961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.425 [2024-11-05 17:52:29.222089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:09.425 [2024-11-05 17:52:29.222128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:09.425 [2024-11-05 17:52:29.222144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.425 [2024-11-05 17:52:29.225797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.425 [2024-11-05 17:52:29.225889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:09.425 [2024-11-05 17:52:29.225914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.605 ms 00:18:09.425 [2024-11-05 17:52:29.225927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.425 [2024-11-05 17:52:29.226258] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:09.425 [2024-11-05 17:52:29.226589] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:09.425 [2024-11-05 17:52:29.226634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.425 [2024-11-05 17:52:29.226645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:09.425 [2024-11-05 17:52:29.226657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.392 ms 00:18:09.425 [2024-11-05 17:52:29.226666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.425 [2024-11-05 17:52:29.229099] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:09.425 [2024-11-05 17:52:29.234634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.425 [2024-11-05 17:52:29.234707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:09.425 [2024-11-05 17:52:29.234722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.549 ms 00:18:09.425 [2024-11-05 17:52:29.234734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.425 [2024-11-05 17:52:29.234857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.425 [2024-11-05 17:52:29.234875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:09.425 [2024-11-05 17:52:29.234890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:18:09.425 [2024-11-05 17:52:29.234900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.425 [2024-11-05 17:52:29.244981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.425 [2024-11-05 
17:52:29.245047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:09.425 [2024-11-05 17:52:29.245062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.015 ms 00:18:09.425 [2024-11-05 17:52:29.245091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.425 [2024-11-05 17:52:29.245266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.425 [2024-11-05 17:52:29.245284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:09.425 [2024-11-05 17:52:29.245295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:18:09.425 [2024-11-05 17:52:29.245308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.425 [2024-11-05 17:52:29.245340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.425 [2024-11-05 17:52:29.245356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:09.425 [2024-11-05 17:52:29.245365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:09.425 [2024-11-05 17:52:29.245375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.425 [2024-11-05 17:52:29.245405] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:09.425 [2024-11-05 17:52:29.247818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.425 [2024-11-05 17:52:29.247865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:09.425 [2024-11-05 17:52:29.247883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.418 ms 00:18:09.425 [2024-11-05 17:52:29.247891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.425 [2024-11-05 17:52:29.247942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.425 [2024-11-05 17:52:29.247951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:09.425 [2024-11-05 17:52:29.247962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:09.425 [2024-11-05 17:52:29.247975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.425 [2024-11-05 17:52:29.248001] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:09.425 [2024-11-05 17:52:29.248022] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:09.425 [2024-11-05 17:52:29.248090] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:09.425 [2024-11-05 17:52:29.248118] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:09.425 [2024-11-05 17:52:29.248247] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:09.425 [2024-11-05 17:52:29.248263] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:09.425 [2024-11-05 17:52:29.248279] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:09.425 [2024-11-05 17:52:29.248292] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:09.425 [2024-11-05 17:52:29.248306] ftl_layout.c: 687:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:09.425 [2024-11-05 17:52:29.248317] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:09.425 [2024-11-05 17:52:29.248328] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:09.425 [2024-11-05 17:52:29.248336] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:09.425 [2024-11-05 17:52:29.248348] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:09.425 [2024-11-05 17:52:29.248357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.425 [2024-11-05 17:52:29.248367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:09.425 [2024-11-05 17:52:29.248375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:18:09.425 [2024-11-05 17:52:29.248386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.425 [2024-11-05 17:52:29.248479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.425 [2024-11-05 17:52:29.248501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:09.425 [2024-11-05 17:52:29.248511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:18:09.425 [2024-11-05 17:52:29.248525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.425 [2024-11-05 17:52:29.248645] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:09.425 [2024-11-05 17:52:29.248669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:09.425 [2024-11-05 17:52:29.248681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:09.425 [2024-11-05 17:52:29.248695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:09.425 [2024-11-05 17:52:29.248704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:09.425 [2024-11-05 17:52:29.248715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:09.425 [2024-11-05 17:52:29.248723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:09.425 [2024-11-05 17:52:29.248734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:09.425 [2024-11-05 17:52:29.248744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:09.425 [2024-11-05 17:52:29.248755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:09.425 [2024-11-05 17:52:29.248771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:09.425 [2024-11-05 17:52:29.248781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:09.425 [2024-11-05 17:52:29.248789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:09.425 [2024-11-05 17:52:29.248802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:09.425 [2024-11-05 17:52:29.248811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:09.425 [2024-11-05 17:52:29.248821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:09.425 [2024-11-05 17:52:29.248831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:09.425 [2024-11-05 17:52:29.248841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:09.425 [2024-11-05 17:52:29.248848] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:18:09.425 [2024-11-05 17:52:29.248863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:09.425 [2024-11-05 17:52:29.248871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:09.425 [2024-11-05 17:52:29.248881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:09.426 [2024-11-05 17:52:29.248889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:09.426 [2024-11-05 17:52:29.248898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:09.426 [2024-11-05 17:52:29.248907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:09.426 [2024-11-05 17:52:29.248917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:09.426 [2024-11-05 17:52:29.248924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:09.426 [2024-11-05 17:52:29.248933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:09.426 [2024-11-05 17:52:29.248941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:09.426 [2024-11-05 17:52:29.248952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:09.426 [2024-11-05 17:52:29.248962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:09.426 [2024-11-05 17:52:29.248971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:09.426 [2024-11-05 17:52:29.248978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:09.426 [2024-11-05 17:52:29.248988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:09.426 [2024-11-05 17:52:29.248996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:09.426 [2024-11-05 17:52:29.249007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:09.426 [2024-11-05 17:52:29.249016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:09.426 [2024-11-05 17:52:29.249026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:09.426 [2024-11-05 17:52:29.249033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:09.426 [2024-11-05 17:52:29.249043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:09.426 [2024-11-05 17:52:29.249050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:09.426 [2024-11-05 17:52:29.249080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:09.426 [2024-11-05 17:52:29.249089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:09.426 [2024-11-05 17:52:29.249099] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:09.426 [2024-11-05 17:52:29.249110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:09.426 [2024-11-05 17:52:29.249121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:09.426 [2024-11-05 17:52:29.249132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:09.426 [2024-11-05 17:52:29.249146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:09.426 [2024-11-05 17:52:29.249157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:09.426 [2024-11-05 17:52:29.249168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:09.426 [2024-11-05 17:52:29.249177] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:09.426 [2024-11-05 17:52:29.249189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:09.426 [2024-11-05 17:52:29.249197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:09.426 [2024-11-05 17:52:29.249211] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:09.426 [2024-11-05 17:52:29.249222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:09.426 [2024-11-05 17:52:29.249238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:09.426 [2024-11-05 17:52:29.249247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:09.426 [2024-11-05 17:52:29.249258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:09.426 [2024-11-05 17:52:29.249266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:09.426 [2024-11-05 17:52:29.249276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:09.426 [2024-11-05 17:52:29.249284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:09.426 [2024-11-05 17:52:29.249295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:09.426 [2024-11-05 17:52:29.249304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:09.426 [2024-11-05 17:52:29.249314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:09.426 [2024-11-05 17:52:29.249322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:09.426 [2024-11-05 17:52:29.249333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:09.426 [2024-11-05 17:52:29.249342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:09.426 [2024-11-05 17:52:29.249356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:09.426 [2024-11-05 17:52:29.249364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:09.426 [2024-11-05 17:52:29.249374] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:09.426 [2024-11-05 17:52:29.249384] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:09.426 [2024-11-05 17:52:29.249395] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 
blk_sz:0x20 00:18:09.426 [2024-11-05 17:52:29.249405] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:09.426 [2024-11-05 17:52:29.249425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:09.426 [2024-11-05 17:52:29.249433] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:09.426 [2024-11-05 17:52:29.249443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.426 [2024-11-05 17:52:29.249453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:09.426 [2024-11-05 17:52:29.249465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.865 ms 00:18:09.426 [2024-11-05 17:52:29.249474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.426 [2024-11-05 17:52:29.267455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.426 [2024-11-05 17:52:29.267536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:09.426 [2024-11-05 17:52:29.267555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.876 ms 00:18:09.426 [2024-11-05 17:52:29.267566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.426 [2024-11-05 17:52:29.267765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.426 [2024-11-05 17:52:29.267779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:09.426 [2024-11-05 17:52:29.267797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:18:09.426 [2024-11-05 17:52:29.267806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.426 [2024-11-05 17:52:29.283127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.426 [2024-11-05 17:52:29.283200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:09.426 [2024-11-05 17:52:29.283219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.291 ms 00:18:09.426 [2024-11-05 17:52:29.283234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.426 [2024-11-05 17:52:29.283402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.426 [2024-11-05 17:52:29.283423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:09.426 [2024-11-05 17:52:29.283442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:09.426 [2024-11-05 17:52:29.283456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.426 [2024-11-05 17:52:29.284181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.426 [2024-11-05 17:52:29.284244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:09.426 [2024-11-05 17:52:29.284265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.673 ms 00:18:09.426 [2024-11-05 17:52:29.284279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.426 [2024-11-05 17:52:29.284519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.426 [2024-11-05 17:52:29.284551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:09.426 [2024-11-05 17:52:29.284571] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:18:09.426 [2024-11-05 17:52:29.284586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.426 [2024-11-05 17:52:29.295181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.426 [2024-11-05 17:52:29.295243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:09.426 [2024-11-05 17:52:29.295261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.547 ms 00:18:09.426 [2024-11-05 17:52:29.295271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.426 [2024-11-05 17:52:29.300412] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:09.426 [2024-11-05 17:52:29.300486] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:09.426 [2024-11-05 17:52:29.300505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.426 [2024-11-05 17:52:29.300515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:09.426 [2024-11-05 17:52:29.300528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.058 ms 00:18:09.426 [2024-11-05 17:52:29.300537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.426 [2024-11-05 17:52:29.318015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.426 [2024-11-05 17:52:29.318103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:09.426 [2024-11-05 17:52:29.318139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.349 ms 00:18:09.426 [2024-11-05 17:52:29.318155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.426 [2024-11-05 17:52:29.322605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.426 [2024-11-05 17:52:29.322670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:09.426 [2024-11-05 17:52:29.322686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.246 ms 00:18:09.426 [2024-11-05 17:52:29.322694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.426 [2024-11-05 17:52:29.326175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.426 [2024-11-05 17:52:29.326230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:09.427 [2024-11-05 17:52:29.326244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.381 ms 00:18:09.427 [2024-11-05 17:52:29.326253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.427 [2024-11-05 17:52:29.326667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.427 [2024-11-05 17:52:29.326710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:09.427 [2024-11-05 17:52:29.326724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:18:09.427 [2024-11-05 17:52:29.326732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.427 [2024-11-05 17:52:29.372836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.427 [2024-11-05 17:52:29.372934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:09.427 [2024-11-05 17:52:29.372959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.037 ms 
00:18:09.427 [2024-11-05 17:52:29.372971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.427 [2024-11-05 17:52:29.383177] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:09.427 [2024-11-05 17:52:29.407493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.427 [2024-11-05 17:52:29.407590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:09.427 [2024-11-05 17:52:29.407608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.292 ms 00:18:09.427 [2024-11-05 17:52:29.407620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.427 [2024-11-05 17:52:29.407759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.427 [2024-11-05 17:52:29.407780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:09.427 [2024-11-05 17:52:29.407793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:09.427 [2024-11-05 17:52:29.407804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.427 [2024-11-05 17:52:29.407872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.427 [2024-11-05 17:52:29.407885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:09.427 [2024-11-05 17:52:29.407894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:18:09.427 [2024-11-05 17:52:29.407906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.427 [2024-11-05 17:52:29.407939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.427 [2024-11-05 17:52:29.407958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:09.427 [2024-11-05 17:52:29.407972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:09.427 [2024-11-05 17:52:29.407982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.427 [2024-11-05 17:52:29.408020] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:09.427 [2024-11-05 17:52:29.408034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.427 [2024-11-05 17:52:29.408042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:09.427 [2024-11-05 17:52:29.408053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:09.427 [2024-11-05 17:52:29.408060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.427 [2024-11-05 17:52:29.415912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.427 [2024-11-05 17:52:29.415989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:09.427 [2024-11-05 17:52:29.416006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.776 ms 00:18:09.427 [2024-11-05 17:52:29.416020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.427 [2024-11-05 17:52:29.416171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.427 [2024-11-05 17:52:29.416189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:09.427 [2024-11-05 17:52:29.416207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:18:09.427 [2024-11-05 17:52:29.416221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.689 [2024-11-05 
17:52:29.418003] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:09.689 [2024-11-05 17:52:29.419662] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 195.740 ms, result 0 00:18:09.689 [2024-11-05 17:52:29.422201] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:09.689 Some configs were skipped because the RPC state that can call them passed over. 00:18:09.690 17:52:29 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:18:09.690 [2024-11-05 17:52:29.666143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.690 [2024-11-05 17:52:29.666271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:09.690 [2024-11-05 17:52:29.666291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.981 ms 00:18:09.690 [2024-11-05 17:52:29.666305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.690 [2024-11-05 17:52:29.666349] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 4.207 ms, result 0 00:18:09.690 true 00:18:09.954 17:52:29 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:18:09.954 [2024-11-05 17:52:29.892039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.954 [2024-11-05 17:52:29.892162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:09.954 [2024-11-05 17:52:29.892184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.646 ms 00:18:09.954 [2024-11-05 17:52:29.892194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.954 [2024-11-05 17:52:29.892245] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.867 ms, result 0 00:18:09.954 true 00:18:09.954 17:52:29 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 86812 00:18:09.954 17:52:29 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 86812 ']' 00:18:09.954 17:52:29 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 86812 00:18:09.954 17:52:29 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:18:09.954 17:52:29 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:09.954 17:52:29 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86812 00:18:09.954 killing process with pid 86812 00:18:09.954 17:52:29 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:09.954 17:52:29 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:09.954 17:52:29 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86812' 00:18:09.954 17:52:29 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 86812 00:18:09.954 17:52:29 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 86812 00:18:10.217 [2024-11-05 17:52:30.156614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.217 [2024-11-05 17:52:30.156735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:10.217 [2024-11-05 17:52:30.156753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:10.217 [2024-11-05 
17:52:30.156767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.217 [2024-11-05 17:52:30.156793] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:10.217 [2024-11-05 17:52:30.157763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.217 [2024-11-05 17:52:30.157804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:10.217 [2024-11-05 17:52:30.157823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.946 ms 00:18:10.217 [2024-11-05 17:52:30.157843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.217 [2024-11-05 17:52:30.158188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.217 [2024-11-05 17:52:30.158212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:10.217 [2024-11-05 17:52:30.158227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:18:10.217 [2024-11-05 17:52:30.158237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.217 [2024-11-05 17:52:30.163203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.217 [2024-11-05 17:52:30.163248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:10.217 [2024-11-05 17:52:30.163263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.937 ms 00:18:10.218 [2024-11-05 17:52:30.163276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.218 [2024-11-05 17:52:30.170482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.218 [2024-11-05 17:52:30.170557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:10.218 [2024-11-05 17:52:30.170576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.154 ms 00:18:10.218 [2024-11-05 17:52:30.170585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.218 [2024-11-05 17:52:30.173997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.218 [2024-11-05 17:52:30.174048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:10.218 [2024-11-05 17:52:30.174075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.336 ms 00:18:10.218 [2024-11-05 17:52:30.174084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.218 [2024-11-05 17:52:30.179729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.218 [2024-11-05 17:52:30.179783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:10.218 [2024-11-05 17:52:30.179797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.586 ms 00:18:10.218 [2024-11-05 17:52:30.179818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.218 [2024-11-05 17:52:30.179987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.218 [2024-11-05 17:52:30.180000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:10.218 [2024-11-05 17:52:30.180012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:18:10.218 [2024-11-05 17:52:30.180021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.218 [2024-11-05 17:52:30.184293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.218 [2024-11-05 17:52:30.184363] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:10.218 [2024-11-05 17:52:30.184384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.237 ms 00:18:10.218 [2024-11-05 17:52:30.184393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.218 [2024-11-05 17:52:30.187506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.218 [2024-11-05 17:52:30.187557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:10.218 [2024-11-05 17:52:30.187575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.030 ms 00:18:10.218 [2024-11-05 17:52:30.187582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.218 [2024-11-05 17:52:30.189839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.218 [2024-11-05 17:52:30.189887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:10.218 [2024-11-05 17:52:30.189902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.201 ms 00:18:10.218 [2024-11-05 17:52:30.189910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.218 [2024-11-05 17:52:30.192877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.218 [2024-11-05 17:52:30.192928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:10.218 [2024-11-05 17:52:30.192940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.877 ms 00:18:10.218 [2024-11-05 17:52:30.192947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.218 [2024-11-05 17:52:30.193027] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:10.218 [2024-11-05 17:52:30.193055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193198] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193431] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 17:52:30.193638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:10.218 [2024-11-05 
17:52:30.193647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.193658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.193666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.193679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.193687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.193697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.193705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.193715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.193723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.193734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.193743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.193754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.193763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.193774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.193784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.193795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.193803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.193813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.193822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.193837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.193845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.193856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.193865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.193875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.193891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:18:10.219 [2024-11-05 17:52:30.193901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.193909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.193918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.193925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.193936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.193945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.193955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.193962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.193972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.193982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.193994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.194001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.194010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:10.219 [2024-11-05 17:52:30.194029] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:10.219 [2024-11-05 17:52:30.194039] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 01e86416-23e3-4012-8803-d38bc6f06433 00:18:10.219 [2024-11-05 17:52:30.194051] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:10.219 [2024-11-05 17:52:30.194074] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:10.219 [2024-11-05 17:52:30.194084] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:10.219 [2024-11-05 17:52:30.194095] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:10.219 [2024-11-05 17:52:30.194102] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:10.219 [2024-11-05 17:52:30.194116] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:10.219 [2024-11-05 17:52:30.194123] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:10.219 [2024-11-05 17:52:30.194133] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:10.219 [2024-11-05 17:52:30.194140] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:10.219 [2024-11-05 17:52:30.194150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.219 [2024-11-05 17:52:30.194158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:10.219 [2024-11-05 17:52:30.194177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.126 ms 00:18:10.219 [2024-11-05 17:52:30.194186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:10.219 [2024-11-05 17:52:30.197377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.219 [2024-11-05 17:52:30.197416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:10.219 [2024-11-05 17:52:30.197430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.161 ms 00:18:10.219 [2024-11-05 17:52:30.197439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.219 [2024-11-05 17:52:30.197620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.219 [2024-11-05 17:52:30.197649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:10.219 [2024-11-05 17:52:30.197661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:18:10.219 [2024-11-05 17:52:30.197673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.219 [2024-11-05 17:52:30.208762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.219 [2024-11-05 17:52:30.208821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:10.219 [2024-11-05 17:52:30.208836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.219 [2024-11-05 17:52:30.208845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.219 [2024-11-05 17:52:30.208969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.219 [2024-11-05 17:52:30.208983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:10.219 [2024-11-05 17:52:30.209000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.219 [2024-11-05 17:52:30.209009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.480 [2024-11-05 17:52:30.209089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.480 [2024-11-05 17:52:30.209100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:10.480 [2024-11-05 17:52:30.209111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.480 [2024-11-05 17:52:30.209119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.480 [2024-11-05 17:52:30.209144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.480 [2024-11-05 17:52:30.209153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:10.480 [2024-11-05 17:52:30.209163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.480 [2024-11-05 17:52:30.209171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.480 [2024-11-05 17:52:30.230726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.480 [2024-11-05 17:52:30.230808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:10.480 [2024-11-05 17:52:30.230843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.480 [2024-11-05 17:52:30.230854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.480 [2024-11-05 17:52:30.246504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.480 [2024-11-05 17:52:30.246587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:10.480 [2024-11-05 17:52:30.246608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.480 
[2024-11-05 17:52:30.246618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.480 [2024-11-05 17:52:30.246729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.480 [2024-11-05 17:52:30.246741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:10.480 [2024-11-05 17:52:30.246753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.480 [2024-11-05 17:52:30.246762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.480 [2024-11-05 17:52:30.246804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.480 [2024-11-05 17:52:30.246843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:10.480 [2024-11-05 17:52:30.246855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.480 [2024-11-05 17:52:30.246864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.480 [2024-11-05 17:52:30.246962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.480 [2024-11-05 17:52:30.246975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:10.480 [2024-11-05 17:52:30.246987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.480 [2024-11-05 17:52:30.246995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.480 [2024-11-05 17:52:30.247046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.480 [2024-11-05 17:52:30.247057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:10.480 [2024-11-05 17:52:30.247093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.480 [2024-11-05 17:52:30.247101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.480 [2024-11-05 17:52:30.247169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.480 [2024-11-05 17:52:30.247179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:10.480 [2024-11-05 17:52:30.247197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.480 [2024-11-05 17:52:30.247206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.480 [2024-11-05 17:52:30.247275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.480 [2024-11-05 17:52:30.247296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:10.480 [2024-11-05 17:52:30.247312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.480 [2024-11-05 17:52:30.247321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.480 [2024-11-05 17:52:30.247518] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 90.857 ms, result 0 00:18:10.741 17:52:30 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:10.741 [2024-11-05 17:52:30.649740] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 
00:18:10.741 [2024-11-05 17:52:30.649939] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86859 ] 00:18:11.003 [2024-11-05 17:52:30.786000] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:11.003 [2024-11-05 17:52:30.817624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.003 [2024-11-05 17:52:30.860022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.267 [2024-11-05 17:52:31.011668] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:11.267 [2024-11-05 17:52:31.011782] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:11.267 [2024-11-05 17:52:31.176731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.267 [2024-11-05 17:52:31.176843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:11.267 [2024-11-05 17:52:31.176863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:11.267 [2024-11-05 17:52:31.176874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.267 [2024-11-05 17:52:31.179761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.267 [2024-11-05 17:52:31.179822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:11.267 [2024-11-05 17:52:31.179837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.863 ms 00:18:11.267 [2024-11-05 17:52:31.179846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.267 [2024-11-05 17:52:31.179967] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:11.267 [2024-11-05 17:52:31.180664] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:11.267 [2024-11-05 17:52:31.180734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.267 [2024-11-05 17:52:31.180746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:11.267 [2024-11-05 17:52:31.180763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.781 ms 00:18:11.267 [2024-11-05 17:52:31.180772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.267 [2024-11-05 17:52:31.183326] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:11.267 [2024-11-05 17:52:31.188322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.267 [2024-11-05 17:52:31.188383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:11.267 [2024-11-05 17:52:31.188398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.999 ms 00:18:11.267 [2024-11-05 17:52:31.188407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.267 [2024-11-05 17:52:31.188509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.267 [2024-11-05 17:52:31.188526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:11.267 [2024-11-05 17:52:31.188537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:11.267 [2024-11-05 
17:52:31.188545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.267 [2024-11-05 17:52:31.200133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.267 [2024-11-05 17:52:31.200189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:11.267 [2024-11-05 17:52:31.200205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.533 ms 00:18:11.267 [2024-11-05 17:52:31.200224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.267 [2024-11-05 17:52:31.200408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.267 [2024-11-05 17:52:31.200422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:11.267 [2024-11-05 17:52:31.200433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:18:11.267 [2024-11-05 17:52:31.200447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.267 [2024-11-05 17:52:31.200482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.267 [2024-11-05 17:52:31.200493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:11.267 [2024-11-05 17:52:31.200506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:11.267 [2024-11-05 17:52:31.200519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.267 [2024-11-05 17:52:31.200550] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:11.267 [2024-11-05 17:52:31.203271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.267 [2024-11-05 17:52:31.203317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:11.267 [2024-11-05 17:52:31.203336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.733 ms 00:18:11.267 [2024-11-05 17:52:31.203350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.267 [2024-11-05 17:52:31.203408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.267 [2024-11-05 17:52:31.203421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:11.267 [2024-11-05 17:52:31.203432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:11.267 [2024-11-05 17:52:31.203442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.267 [2024-11-05 17:52:31.203472] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:11.267 [2024-11-05 17:52:31.203500] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:11.267 [2024-11-05 17:52:31.203550] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:11.267 [2024-11-05 17:52:31.203574] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:11.267 [2024-11-05 17:52:31.203690] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:11.267 [2024-11-05 17:52:31.203707] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:11.267 [2024-11-05 17:52:31.203720] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:18:11.267 [2024-11-05 17:52:31.203735] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:11.267 [2024-11-05 17:52:31.203747] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:11.267 [2024-11-05 17:52:31.203758] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:11.267 [2024-11-05 17:52:31.203769] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:11.267 [2024-11-05 17:52:31.203786] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:11.268 [2024-11-05 17:52:31.203799] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:11.268 [2024-11-05 17:52:31.203809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.268 [2024-11-05 17:52:31.203819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:11.268 [2024-11-05 17:52:31.203828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:18:11.268 [2024-11-05 17:52:31.203837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.268 [2024-11-05 17:52:31.203930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.268 [2024-11-05 17:52:31.203944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:11.268 [2024-11-05 17:52:31.203953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:18:11.268 [2024-11-05 17:52:31.203962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.268 [2024-11-05 17:52:31.204088] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:11.268 [2024-11-05 17:52:31.204110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:11.268 [2024-11-05 17:52:31.204122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:11.268 [2024-11-05 17:52:31.204132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:11.268 [2024-11-05 17:52:31.204148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:11.268 [2024-11-05 17:52:31.204158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:11.268 [2024-11-05 17:52:31.204180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:11.268 [2024-11-05 17:52:31.204190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:11.268 [2024-11-05 17:52:31.204199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:11.268 [2024-11-05 17:52:31.204208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:11.268 [2024-11-05 17:52:31.204218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:11.268 [2024-11-05 17:52:31.204226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:11.268 [2024-11-05 17:52:31.204234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:11.268 [2024-11-05 17:52:31.204243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:11.268 [2024-11-05 17:52:31.204252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:11.268 [2024-11-05 17:52:31.204260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:11.268 [2024-11-05 17:52:31.204269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:18:11.268 [2024-11-05 17:52:31.204278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:11.268 [2024-11-05 17:52:31.204285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:11.268 [2024-11-05 17:52:31.204293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:11.268 [2024-11-05 17:52:31.204302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:11.268 [2024-11-05 17:52:31.204313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:11.268 [2024-11-05 17:52:31.204327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:11.268 [2024-11-05 17:52:31.204336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:11.268 [2024-11-05 17:52:31.204344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:11.268 [2024-11-05 17:52:31.204353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:11.268 [2024-11-05 17:52:31.204362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:11.268 [2024-11-05 17:52:31.204371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:11.268 [2024-11-05 17:52:31.204379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:11.268 [2024-11-05 17:52:31.204387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:11.268 [2024-11-05 17:52:31.204395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:11.268 [2024-11-05 17:52:31.204405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:11.268 [2024-11-05 17:52:31.204412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:11.268 [2024-11-05 17:52:31.204419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:11.268 [2024-11-05 17:52:31.204428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:11.268 [2024-11-05 17:52:31.204436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:11.268 [2024-11-05 17:52:31.204444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:11.268 [2024-11-05 17:52:31.204452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:11.268 [2024-11-05 17:52:31.204462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:11.268 [2024-11-05 17:52:31.204470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:11.268 [2024-11-05 17:52:31.204478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:11.268 [2024-11-05 17:52:31.204486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:11.268 [2024-11-05 17:52:31.204494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:11.268 [2024-11-05 17:52:31.204501] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:11.268 [2024-11-05 17:52:31.204511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:11.268 [2024-11-05 17:52:31.204519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:11.268 [2024-11-05 17:52:31.204528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:11.268 [2024-11-05 17:52:31.204537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:11.268 [2024-11-05 17:52:31.204545] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:11.268 [2024-11-05 17:52:31.204553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:11.268 [2024-11-05 17:52:31.204560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:11.268 [2024-11-05 17:52:31.204568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:11.268 [2024-11-05 17:52:31.204575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:11.268 [2024-11-05 17:52:31.204588] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:11.268 [2024-11-05 17:52:31.204604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:11.268 [2024-11-05 17:52:31.204617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:11.268 [2024-11-05 17:52:31.204626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:11.268 [2024-11-05 17:52:31.204632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:11.268 [2024-11-05 17:52:31.204640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:11.268 [2024-11-05 17:52:31.204648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:11.268 [2024-11-05 17:52:31.204655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:11.268 [2024-11-05 17:52:31.204664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:11.268 [2024-11-05 17:52:31.204671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:11.268 [2024-11-05 17:52:31.204679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:11.268 [2024-11-05 17:52:31.204688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:11.268 [2024-11-05 17:52:31.204695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:11.268 [2024-11-05 17:52:31.204707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:11.268 [2024-11-05 17:52:31.204716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:11.268 [2024-11-05 17:52:31.204724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:11.268 [2024-11-05 17:52:31.204733] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:11.268 [2024-11-05 17:52:31.204749] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:11.268 [2024-11-05 17:52:31.204762] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:11.268 [2024-11-05 17:52:31.204771] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:11.268 [2024-11-05 17:52:31.204780] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:11.268 [2024-11-05 17:52:31.204788] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:11.268 [2024-11-05 17:52:31.204797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.268 [2024-11-05 17:52:31.204805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:11.268 [2024-11-05 17:52:31.204815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.796 ms 00:18:11.268 [2024-11-05 17:52:31.204827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.268 [2024-11-05 17:52:31.225285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.268 [2024-11-05 17:52:31.225349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:11.268 [2024-11-05 17:52:31.225365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.373 ms 00:18:11.268 [2024-11-05 17:52:31.225384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.268 [2024-11-05 17:52:31.225566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.268 [2024-11-05 17:52:31.225579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:11.268 [2024-11-05 17:52:31.225590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:18:11.268 [2024-11-05 17:52:31.225600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.268 [2024-11-05 17:52:31.257004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.268 [2024-11-05 17:52:31.257124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:11.268 [2024-11-05 17:52:31.257149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.372 ms 00:18:11.268 [2024-11-05 17:52:31.257175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.268 [2024-11-05 17:52:31.257356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.269 [2024-11-05 17:52:31.257384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:11.269 [2024-11-05 17:52:31.257400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:11.269 [2024-11-05 17:52:31.257414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.269 [2024-11-05 17:52:31.258201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.269 [2024-11-05 17:52:31.258253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:11.269 [2024-11-05 17:52:31.258270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.745 ms 00:18:11.269 [2024-11-05 17:52:31.258282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.269 [2024-11-05 17:52:31.258512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:11.269 [2024-11-05 17:52:31.258528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:11.269 [2024-11-05 17:52:31.258547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:18:11.269 [2024-11-05 17:52:31.258558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.531 [2024-11-05 17:52:31.270786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.531 [2024-11-05 17:52:31.270867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:11.531 [2024-11-05 17:52:31.270882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.191 ms 00:18:11.531 [2024-11-05 17:52:31.270898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.531 [2024-11-05 17:52:31.275946] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:11.531 [2024-11-05 17:52:31.276006] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:11.531 [2024-11-05 17:52:31.276022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.531 [2024-11-05 17:52:31.276034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:11.531 [2024-11-05 17:52:31.276046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.885 ms 00:18:11.531 [2024-11-05 17:52:31.276055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.531 [2024-11-05 17:52:31.292682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.531 [2024-11-05 17:52:31.292744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:11.531 [2024-11-05 17:52:31.292759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.496 ms 00:18:11.531 [2024-11-05 17:52:31.292770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.531 [2024-11-05 17:52:31.296413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.531 [2024-11-05 17:52:31.296466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:11.531 [2024-11-05 17:52:31.296478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.501 ms 00:18:11.531 [2024-11-05 17:52:31.296486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.531 [2024-11-05 17:52:31.299110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.531 [2024-11-05 17:52:31.299156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:11.531 [2024-11-05 17:52:31.299167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.562 ms 00:18:11.531 [2024-11-05 17:52:31.299175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.531 [2024-11-05 17:52:31.299589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.531 [2024-11-05 17:52:31.299624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:11.531 [2024-11-05 17:52:31.299636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:18:11.531 [2024-11-05 17:52:31.299646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.532 [2024-11-05 17:52:31.333323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.532 [2024-11-05 17:52:31.333464] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:11.532 [2024-11-05 17:52:31.333490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.645 ms 00:18:11.532 [2024-11-05 17:52:31.333511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.532 [2024-11-05 17:52:31.343462] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:11.532 [2024-11-05 17:52:31.370719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.532 [2024-11-05 17:52:31.370822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:11.532 [2024-11-05 17:52:31.370843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.000 ms 00:18:11.532 [2024-11-05 17:52:31.370854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.532 [2024-11-05 17:52:31.371052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.532 [2024-11-05 17:52:31.371101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:11.532 [2024-11-05 17:52:31.371114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:11.532 [2024-11-05 17:52:31.371124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.532 [2024-11-05 17:52:31.371244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.532 [2024-11-05 17:52:31.371257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:11.532 [2024-11-05 17:52:31.371267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:18:11.532 [2024-11-05 17:52:31.371277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.532 [2024-11-05 17:52:31.371310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.532 [2024-11-05 17:52:31.371321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:11.532 [2024-11-05 17:52:31.371335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:11.532 [2024-11-05 17:52:31.371344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.532 [2024-11-05 17:52:31.371391] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:11.532 [2024-11-05 17:52:31.371403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.532 [2024-11-05 17:52:31.371412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:11.532 [2024-11-05 17:52:31.371423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:11.532 [2024-11-05 17:52:31.371432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.532 [2024-11-05 17:52:31.379090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.532 [2024-11-05 17:52:31.379169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:11.532 [2024-11-05 17:52:31.379187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.627 ms 00:18:11.532 [2024-11-05 17:52:31.379205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.532 [2024-11-05 17:52:31.379329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.532 [2024-11-05 17:52:31.379343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:11.532 [2024-11-05 17:52:31.379354] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:11.532 [2024-11-05 17:52:31.379364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.532 [2024-11-05 17:52:31.380845] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:11.532 [2024-11-05 17:52:31.382356] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 203.739 ms, result 0 00:18:11.532 [2024-11-05 17:52:31.383809] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:11.532 [2024-11-05 17:52:31.391118] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:12.496  [2024-11-05T17:52:33.879Z] Copying: 12/256 [MB] (12 MBps) [2024-11-05T17:52:34.467Z] Copying: 23/256 [MB] (10 MBps) [2024-11-05T17:52:35.850Z] Copying: 33/256 [MB] (10 MBps) [2024-11-05T17:52:36.795Z] Copying: 44/256 [MB] (10 MBps) [2024-11-05T17:52:37.754Z] Copying: 55376/262144 [kB] (10216 kBps) [2024-11-05T17:52:38.696Z] Copying: 65480/262144 [kB] (10104 kBps) [2024-11-05T17:52:39.638Z] Copying: 74/256 [MB] (10 MBps) [2024-11-05T17:52:40.599Z] Copying: 85780/262144 [kB] (9832 kBps) [2024-11-05T17:52:41.542Z] Copying: 95868/262144 [kB] (10088 kBps) [2024-11-05T17:52:42.500Z] Copying: 105/256 [MB] (11 MBps) [2024-11-05T17:52:43.886Z] Copying: 116/256 [MB] (10 MBps) [2024-11-05T17:52:44.474Z] Copying: 126/256 [MB] (10 MBps) [2024-11-05T17:52:45.878Z] Copying: 139/256 [MB] (12 MBps) [2024-11-05T17:52:46.817Z] Copying: 151/256 [MB] (11 MBps) [2024-11-05T17:52:47.761Z] Copying: 162/256 [MB] (11 MBps) [2024-11-05T17:52:48.702Z] Copying: 172/256 [MB] (10 MBps) [2024-11-05T17:52:49.644Z] Copying: 186416/262144 [kB] (9868 kBps) [2024-11-05T17:52:50.587Z] Copying: 193/256 [MB] (11 MBps) [2024-11-05T17:52:51.531Z] Copying: 204/256 [MB] (10 MBps) [2024-11-05T17:52:52.475Z] Copying: 219272/262144 [kB] (10232 kBps) [2024-11-05T17:52:53.857Z] Copying: 229376/262144 [kB] (10104 kBps) [2024-11-05T17:52:54.799Z] Copying: 239188/262144 [kB] (9812 kBps) [2024-11-05T17:52:55.370Z] Copying: 247/256 [MB] (13 MBps) [2024-11-05T17:52:55.632Z] Copying: 256/256 [MB] (average 10 MBps)[2024-11-05 17:52:55.541570] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:35.641 [2024-11-05 17:52:55.542747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.641 [2024-11-05 17:52:55.542779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:35.641 [2024-11-05 17:52:55.542793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:35.641 [2024-11-05 17:52:55.542802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.641 [2024-11-05 17:52:55.542834] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:35.641 [2024-11-05 17:52:55.543293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.641 [2024-11-05 17:52:55.543317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:35.641 [2024-11-05 17:52:55.543331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.445 ms 00:18:35.641 [2024-11-05 17:52:55.543339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.641 [2024-11-05 17:52:55.543597] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.641 [2024-11-05 17:52:55.543615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:35.641 [2024-11-05 17:52:55.543624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:18:35.641 [2024-11-05 17:52:55.543633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.641 [2024-11-05 17:52:55.547791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.641 [2024-11-05 17:52:55.547816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:35.641 [2024-11-05 17:52:55.547826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.141 ms 00:18:35.641 [2024-11-05 17:52:55.547835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.641 [2024-11-05 17:52:55.554787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.641 [2024-11-05 17:52:55.554814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:35.641 [2024-11-05 17:52:55.554914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.932 ms 00:18:35.641 [2024-11-05 17:52:55.554921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.641 [2024-11-05 17:52:55.557384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.642 [2024-11-05 17:52:55.557416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:35.642 [2024-11-05 17:52:55.557433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.404 ms 00:18:35.642 [2024-11-05 17:52:55.557440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.642 [2024-11-05 17:52:55.561403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.642 [2024-11-05 17:52:55.561439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:35.642 [2024-11-05 17:52:55.561448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.930 ms 00:18:35.642 [2024-11-05 17:52:55.561456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.642 [2024-11-05 17:52:55.561573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.642 [2024-11-05 17:52:55.561581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:35.642 [2024-11-05 17:52:55.561594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:18:35.642 [2024-11-05 17:52:55.561601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.642 [2024-11-05 17:52:55.564461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.642 [2024-11-05 17:52:55.564505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:35.642 [2024-11-05 17:52:55.564516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.841 ms 00:18:35.642 [2024-11-05 17:52:55.564524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.642 [2024-11-05 17:52:55.566990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.642 [2024-11-05 17:52:55.567022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:35.642 [2024-11-05 17:52:55.567031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.431 ms 00:18:35.642 [2024-11-05 17:52:55.567038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:35.642 [2024-11-05 17:52:55.568896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.642 [2024-11-05 17:52:55.568925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:35.642 [2024-11-05 17:52:55.568933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.812 ms 00:18:35.642 [2024-11-05 17:52:55.568940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.642 [2024-11-05 17:52:55.571609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.642 [2024-11-05 17:52:55.571652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:35.642 [2024-11-05 17:52:55.571663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.608 ms 00:18:35.642 [2024-11-05 17:52:55.571671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.642 [2024-11-05 17:52:55.571705] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:35.642 [2024-11-05 17:52:55.571719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.571997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.572005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.572012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.572020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.572027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.572034] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.572041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.572048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.572055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.572062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.572081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.572089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.572096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.572103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.572111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.572118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.572125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.572132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.572139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.572147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.572154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.572162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.572169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.572177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.572185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.572193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.572200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.572208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.572215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 17:52:55.572222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:35.642 [2024-11-05 
17:52:55.572230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:35.643 [2024-11-05 17:52:55.572237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:35.643 [2024-11-05 17:52:55.572244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:35.643 [2024-11-05 17:52:55.572251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:35.643 [2024-11-05 17:52:55.572259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:35.643 [2024-11-05 17:52:55.572266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:35.643 [2024-11-05 17:52:55.572273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:35.643 [2024-11-05 17:52:55.572281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:35.643 [2024-11-05 17:52:55.572288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:35.643 [2024-11-05 17:52:55.572295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:35.643 [2024-11-05 17:52:55.572302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:35.643 [2024-11-05 17:52:55.572309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:35.643 [2024-11-05 17:52:55.572317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:35.643 [2024-11-05 17:52:55.572324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:35.643 [2024-11-05 17:52:55.572331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:35.643 [2024-11-05 17:52:55.572338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:35.643 [2024-11-05 17:52:55.572346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:35.643 [2024-11-05 17:52:55.572353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:35.643 [2024-11-05 17:52:55.572360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:35.643 [2024-11-05 17:52:55.572367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:35.643 [2024-11-05 17:52:55.572374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:35.643 [2024-11-05 17:52:55.572381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:35.643 [2024-11-05 17:52:55.572388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:35.643 [2024-11-05 17:52:55.572396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:35.643 [2024-11-05 17:52:55.572403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 
00:18:35.643 [2024-11-05 17:52:55.572411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:35.643 [2024-11-05 17:52:55.572418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:35.643 [2024-11-05 17:52:55.572425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:35.643 [2024-11-05 17:52:55.572433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:35.643 [2024-11-05 17:52:55.572441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:35.643 [2024-11-05 17:52:55.572448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:35.643 [2024-11-05 17:52:55.572455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:35.643 [2024-11-05 17:52:55.572462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:35.643 [2024-11-05 17:52:55.572478] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:35.643 [2024-11-05 17:52:55.572494] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 01e86416-23e3-4012-8803-d38bc6f06433 00:18:35.643 [2024-11-05 17:52:55.572502] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:35.643 [2024-11-05 17:52:55.572509] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:35.643 [2024-11-05 17:52:55.572517] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:35.643 [2024-11-05 17:52:55.572524] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:35.643 [2024-11-05 17:52:55.572530] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:35.643 [2024-11-05 17:52:55.572541] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:35.643 [2024-11-05 17:52:55.572548] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:35.643 [2024-11-05 17:52:55.572554] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:35.643 [2024-11-05 17:52:55.572562] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:35.643 [2024-11-05 17:52:55.572569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.643 [2024-11-05 17:52:55.572577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:35.643 [2024-11-05 17:52:55.572585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.865 ms 00:18:35.643 [2024-11-05 17:52:55.572593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.643 [2024-11-05 17:52:55.574037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.643 [2024-11-05 17:52:55.574062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:35.643 [2024-11-05 17:52:55.574083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.427 ms 00:18:35.643 [2024-11-05 17:52:55.574094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.643 [2024-11-05 17:52:55.574171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.643 [2024-11-05 17:52:55.574181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:35.643 
[2024-11-05 17:52:55.574190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:18:35.643 [2024-11-05 17:52:55.574198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.643 [2024-11-05 17:52:55.579494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.643 [2024-11-05 17:52:55.579529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:35.643 [2024-11-05 17:52:55.579543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.643 [2024-11-05 17:52:55.579555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.643 [2024-11-05 17:52:55.579638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.643 [2024-11-05 17:52:55.579648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:35.643 [2024-11-05 17:52:55.579656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.643 [2024-11-05 17:52:55.579663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.643 [2024-11-05 17:52:55.579702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.643 [2024-11-05 17:52:55.579711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:35.643 [2024-11-05 17:52:55.579719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.643 [2024-11-05 17:52:55.579729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.643 [2024-11-05 17:52:55.579746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.643 [2024-11-05 17:52:55.579753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:35.643 [2024-11-05 17:52:55.579761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.643 [2024-11-05 17:52:55.579767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.643 [2024-11-05 17:52:55.588512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.643 [2024-11-05 17:52:55.588557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:35.643 [2024-11-05 17:52:55.588567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.643 [2024-11-05 17:52:55.588579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.643 [2024-11-05 17:52:55.595575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.643 [2024-11-05 17:52:55.595617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:35.643 [2024-11-05 17:52:55.595627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.643 [2024-11-05 17:52:55.595635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.643 [2024-11-05 17:52:55.595681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.643 [2024-11-05 17:52:55.595691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:35.643 [2024-11-05 17:52:55.595699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.643 [2024-11-05 17:52:55.595716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.643 [2024-11-05 17:52:55.595746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.643 [2024-11-05 17:52:55.595754] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:35.643 [2024-11-05 17:52:55.595762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.643 [2024-11-05 17:52:55.595769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.643 [2024-11-05 17:52:55.595834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.643 [2024-11-05 17:52:55.595844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:35.643 [2024-11-05 17:52:55.595855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.643 [2024-11-05 17:52:55.595862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.643 [2024-11-05 17:52:55.595892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.643 [2024-11-05 17:52:55.595902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:35.643 [2024-11-05 17:52:55.595910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.643 [2024-11-05 17:52:55.595917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.643 [2024-11-05 17:52:55.595952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.643 [2024-11-05 17:52:55.595960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:35.643 [2024-11-05 17:52:55.595968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.643 [2024-11-05 17:52:55.595975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.643 [2024-11-05 17:52:55.596019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.643 [2024-11-05 17:52:55.596035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:35.643 [2024-11-05 17:52:55.596043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.643 [2024-11-05 17:52:55.596050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.643 [2024-11-05 17:52:55.596197] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 53.428 ms, result 0 00:18:35.904 00:18:35.904 00:18:35.904 17:52:55 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:36.476 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:18:36.476 17:52:56 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:36.476 17:52:56 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:18:36.476 17:52:56 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:36.476 17:52:56 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:36.476 17:52:56 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:18:36.476 17:52:56 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:36.476 17:52:56 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 86812 00:18:36.476 17:52:56 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 86812 ']' 00:18:36.476 Process with pid 86812 is not found 00:18:36.476 17:52:56 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 86812 00:18:36.476 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (86812) - No such process 00:18:36.476 17:52:56 ftl.ftl_trim -- 
common/autotest_common.sh@979 -- # echo 'Process with pid 86812 is not found' 00:18:36.476 ************************************ 00:18:36.476 END TEST ftl_trim 00:18:36.476 00:18:36.476 real 1m24.931s 00:18:36.476 user 2m0.126s 00:18:36.476 sys 0m5.632s 00:18:36.476 17:52:56 ftl.ftl_trim -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:36.476 17:52:56 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:36.476 ************************************ 00:18:36.476 17:52:56 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:36.476 17:52:56 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:36.476 17:52:56 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:36.476 17:52:56 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:36.476 ************************************ 00:18:36.476 START TEST ftl_restore 00:18:36.476 ************************************ 00:18:36.476 17:52:56 ftl.ftl_restore -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:36.737 * Looking for test storage... 00:18:36.737 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:36.737 17:52:56 ftl.ftl_restore -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:36.737 17:52:56 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lcov --version 00:18:36.737 17:52:56 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:36.737 17:52:56 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:36.737 17:52:56 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:36.737 17:52:56 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:36.737 17:52:56 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:36.737 17:52:56 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:18:36.737 17:52:56 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:18:36.737 17:52:56 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:18:36.737 17:52:56 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:18:36.737 17:52:56 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:18:36.737 17:52:56 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:18:36.737 17:52:56 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:18:36.737 17:52:56 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:36.737 17:52:56 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:18:36.737 17:52:56 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:18:36.737 17:52:56 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:36.737 17:52:56 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:36.737 17:52:56 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:18:36.737 17:52:56 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:18:36.737 17:52:56 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:36.737 17:52:56 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:18:36.737 17:52:56 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:18:36.737 17:52:56 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:18:36.737 17:52:56 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:18:36.737 17:52:56 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:36.737 17:52:56 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:18:36.737 17:52:56 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:18:36.737 17:52:56 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:36.737 17:52:56 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:36.737 17:52:56 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:18:36.737 17:52:56 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:36.737 17:52:56 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:36.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.737 --rc genhtml_branch_coverage=1 00:18:36.737 --rc genhtml_function_coverage=1 00:18:36.737 --rc genhtml_legend=1 00:18:36.737 --rc geninfo_all_blocks=1 00:18:36.737 --rc geninfo_unexecuted_blocks=1 00:18:36.737 00:18:36.737 ' 00:18:36.737 17:52:56 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:36.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.737 --rc genhtml_branch_coverage=1 00:18:36.737 --rc genhtml_function_coverage=1 00:18:36.737 --rc genhtml_legend=1 00:18:36.737 --rc geninfo_all_blocks=1 00:18:36.737 --rc geninfo_unexecuted_blocks=1 00:18:36.737 00:18:36.737 ' 00:18:36.737 17:52:56 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:36.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.737 --rc genhtml_branch_coverage=1 00:18:36.737 --rc genhtml_function_coverage=1 00:18:36.737 --rc genhtml_legend=1 00:18:36.737 --rc geninfo_all_blocks=1 00:18:36.737 --rc geninfo_unexecuted_blocks=1 00:18:36.738 00:18:36.738 ' 00:18:36.738 17:52:56 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:36.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.738 --rc genhtml_branch_coverage=1 00:18:36.738 --rc genhtml_function_coverage=1 00:18:36.738 --rc genhtml_legend=1 00:18:36.738 --rc geninfo_all_blocks=1 00:18:36.738 --rc geninfo_unexecuted_blocks=1 00:18:36.738 00:18:36.738 ' 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
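The xtrace just above implements the lcov version gate: lt 1.15 2 defers to cmp_versions, which splits each version string on '.', '-' and ':' (the IFS=.-: reads in the trace) and compares the fields numerically, treating missing fields as zero. A minimal standalone sketch of that comparison, assuming purely numeric fields (the decimal() sanitizer visible in the trace is omitted here):

  # Component-wise "less than" over dotted version strings, as traced above.
  lt() {
    local -a ver1 ver2
    local v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly greater: not less-than
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1                                            # equal: not less-than
  }

  lt 1.15 2 && echo "lcov 1.15 predates 2.x"            # succeeds, so the legacy LCOV_OPTS are exported
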
00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:18:36.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
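Before the target comes up, restore.sh sources ftl/common.sh and anchors every path to its own location (the dirname/readlink sequence at common.sh@8-10 in the trace); the exported spdk_tgt_bin, config paths and rpc_py all hang off the resolved repo root. Distilled, the self-locating preamble is:

  # Resolve paths relative to the script so the suite runs from any CWD
  # (mirrors the dirname/readlink sequence traced at common.sh@8-10).
  testdir=$(readlink -f "$(dirname "$0")")   # -> .../spdk/test/ftl
  rootdir=$(readlink -f "$testdir/../..")    # -> .../spdk_repo/spdk
  rpc_py=$rootdir/scripts/rpc.py             # JSON-RPC client used for every bdev call below
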
00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.6P3zcEIzZG 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=87189 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 87189 00:18:36.738 17:52:56 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.738 17:52:56 ftl.ftl_restore -- common/autotest_common.sh@833 -- # '[' -z 87189 ']' 00:18:36.738 17:52:56 ftl.ftl_restore -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.738 17:52:56 ftl.ftl_restore -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:36.738 17:52:56 ftl.ftl_restore -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.738 17:52:56 ftl.ftl_restore -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:36.738 17:52:56 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:18:36.738 [2024-11-05 17:52:56.713631] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:18:36.738 [2024-11-05 17:52:56.713762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87189 ] 00:18:36.999 [2024-11-05 17:52:56.842881] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
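At this point restore.sh has parsed its options (getopts :u:c:f routes -c's argument 0000:00:10.0 into nv_cache, and after shift 2 the positional 0000:00:11.0 becomes the base device), launched spdk_tgt in the background as pid 87189, and parked in waitforlisten until the RPC socket answers. A hedged sketch of that start-and-wait shape follows; the polling body is an assumption, since the real waitforlisten in autotest_common.sh carries more retries and diagnostics, and the traced trap calls restore_kill rather than a bare kill:

  # Sketch of the launch-and-wait pattern traced above (not the real waitforlisten).
  "$spdk_tgt_bin" &
  svcpid=$!
  trap 'kill "$svcpid"; exit 1' SIGINT SIGTERM EXIT     # trace uses restore_kill here
  rpc_addr=/var/tmp/spdk.sock
  for (( i = 0; i < 100; i++ )); do                     # max_retries=100 per the trace
    kill -0 "$svcpid" 2>/dev/null || exit 1             # target died before listening
    # Any cheap RPC works as a liveness probe; rpc_get_methods is assumed here.
    "$rpc_py" -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
  done
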
00:18:36.999 [2024-11-05 17:52:56.873299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.999 [2024-11-05 17:52:56.893281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.939 17:52:57 ftl.ftl_restore -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:37.939 17:52:57 ftl.ftl_restore -- common/autotest_common.sh@866 -- # return 0 00:18:37.939 17:52:57 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:37.939 17:52:57 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:18:37.939 17:52:57 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:37.939 17:52:57 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:18:37.939 17:52:57 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:18:37.939 17:52:57 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:37.939 17:52:57 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:37.939 17:52:57 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:18:37.939 17:52:57 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:37.939 17:52:57 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:18:37.939 17:52:57 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:37.939 17:52:57 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:18:37.939 17:52:57 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:18:37.939 17:52:57 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:38.200 17:52:58 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:38.200 { 00:18:38.200 "name": "nvme0n1", 00:18:38.200 "aliases": [ 00:18:38.200 "25fe32b4-3354-4367-8b15-953dd5c1600e" 00:18:38.200 ], 00:18:38.200 "product_name": "NVMe disk", 00:18:38.200 "block_size": 4096, 00:18:38.200 "num_blocks": 1310720, 00:18:38.200 "uuid": "25fe32b4-3354-4367-8b15-953dd5c1600e", 00:18:38.200 "numa_id": -1, 00:18:38.200 "assigned_rate_limits": { 00:18:38.200 "rw_ios_per_sec": 0, 00:18:38.200 "rw_mbytes_per_sec": 0, 00:18:38.200 "r_mbytes_per_sec": 0, 00:18:38.200 "w_mbytes_per_sec": 0 00:18:38.200 }, 00:18:38.200 "claimed": true, 00:18:38.200 "claim_type": "read_many_write_one", 00:18:38.200 "zoned": false, 00:18:38.200 "supported_io_types": { 00:18:38.200 "read": true, 00:18:38.200 "write": true, 00:18:38.200 "unmap": true, 00:18:38.200 "flush": true, 00:18:38.200 "reset": true, 00:18:38.200 "nvme_admin": true, 00:18:38.200 "nvme_io": true, 00:18:38.200 "nvme_io_md": false, 00:18:38.200 "write_zeroes": true, 00:18:38.200 "zcopy": false, 00:18:38.200 "get_zone_info": false, 00:18:38.200 "zone_management": false, 00:18:38.200 "zone_append": false, 00:18:38.200 "compare": true, 00:18:38.200 "compare_and_write": false, 00:18:38.200 "abort": true, 00:18:38.200 "seek_hole": false, 00:18:38.200 "seek_data": false, 00:18:38.200 "copy": true, 00:18:38.200 "nvme_iov_md": false 00:18:38.200 }, 00:18:38.200 "driver_specific": { 00:18:38.200 "nvme": [ 00:18:38.200 { 00:18:38.200 "pci_address": "0000:00:11.0", 00:18:38.200 "trid": { 00:18:38.200 "trtype": "PCIe", 00:18:38.200 "traddr": "0000:00:11.0" 00:18:38.200 }, 00:18:38.200 "ctrlr_data": { 00:18:38.200 "cntlid": 0, 00:18:38.200 "vendor_id": "0x1b36", 00:18:38.200 "model_number": "QEMU NVMe Ctrl", 00:18:38.200 
"serial_number": "12341", 00:18:38.200 "firmware_revision": "8.0.0", 00:18:38.200 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:38.200 "oacs": { 00:18:38.200 "security": 0, 00:18:38.200 "format": 1, 00:18:38.200 "firmware": 0, 00:18:38.200 "ns_manage": 1 00:18:38.200 }, 00:18:38.200 "multi_ctrlr": false, 00:18:38.200 "ana_reporting": false 00:18:38.200 }, 00:18:38.200 "vs": { 00:18:38.200 "nvme_version": "1.4" 00:18:38.200 }, 00:18:38.200 "ns_data": { 00:18:38.200 "id": 1, 00:18:38.200 "can_share": false 00:18:38.200 } 00:18:38.200 } 00:18:38.200 ], 00:18:38.200 "mp_policy": "active_passive" 00:18:38.200 } 00:18:38.200 } 00:18:38.200 ]' 00:18:38.200 17:52:58 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:38.200 17:52:58 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:18:38.200 17:52:58 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:38.200 17:52:58 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=1310720 00:18:38.200 17:52:58 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:18:38.200 17:52:58 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 5120 00:18:38.200 17:52:58 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:18:38.200 17:52:58 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:38.200 17:52:58 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:18:38.200 17:52:58 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:38.200 17:52:58 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:38.460 17:52:58 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=ecc95f44-c7d9-40a9-a475-a2feb3365049 00:18:38.460 17:52:58 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:18:38.460 17:52:58 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ecc95f44-c7d9-40a9-a475-a2feb3365049 00:18:38.722 17:52:58 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:38.983 17:52:58 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=48eece26-898b-44f6-952c-0c08e72728c6 00:18:38.983 17:52:58 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 48eece26-898b-44f6-952c-0c08e72728c6 00:18:39.244 17:52:58 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=45626d5f-0a39-424e-a071-efd83a346f77 00:18:39.244 17:52:58 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:18:39.244 17:52:58 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 45626d5f-0a39-424e-a071-efd83a346f77 00:18:39.244 17:52:58 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:18:39.244 17:52:58 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:39.244 17:52:58 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=45626d5f-0a39-424e-a071-efd83a346f77 00:18:39.244 17:52:58 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:18:39.244 17:52:58 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 45626d5f-0a39-424e-a071-efd83a346f77 00:18:39.244 17:52:58 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=45626d5f-0a39-424e-a071-efd83a346f77 00:18:39.244 17:52:58 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:39.244 17:52:58 ftl.ftl_restore -- common/autotest_common.sh@1382 
-- # local bs 00:18:39.244 17:52:58 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:18:39.244 17:52:58 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 45626d5f-0a39-424e-a071-efd83a346f77 00:18:39.244 17:52:59 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:39.244 { 00:18:39.244 "name": "45626d5f-0a39-424e-a071-efd83a346f77", 00:18:39.244 "aliases": [ 00:18:39.244 "lvs/nvme0n1p0" 00:18:39.244 ], 00:18:39.244 "product_name": "Logical Volume", 00:18:39.244 "block_size": 4096, 00:18:39.244 "num_blocks": 26476544, 00:18:39.244 "uuid": "45626d5f-0a39-424e-a071-efd83a346f77", 00:18:39.244 "assigned_rate_limits": { 00:18:39.244 "rw_ios_per_sec": 0, 00:18:39.244 "rw_mbytes_per_sec": 0, 00:18:39.244 "r_mbytes_per_sec": 0, 00:18:39.244 "w_mbytes_per_sec": 0 00:18:39.244 }, 00:18:39.244 "claimed": false, 00:18:39.244 "zoned": false, 00:18:39.244 "supported_io_types": { 00:18:39.244 "read": true, 00:18:39.244 "write": true, 00:18:39.244 "unmap": true, 00:18:39.244 "flush": false, 00:18:39.244 "reset": true, 00:18:39.244 "nvme_admin": false, 00:18:39.244 "nvme_io": false, 00:18:39.244 "nvme_io_md": false, 00:18:39.244 "write_zeroes": true, 00:18:39.244 "zcopy": false, 00:18:39.244 "get_zone_info": false, 00:18:39.244 "zone_management": false, 00:18:39.244 "zone_append": false, 00:18:39.244 "compare": false, 00:18:39.244 "compare_and_write": false, 00:18:39.244 "abort": false, 00:18:39.244 "seek_hole": true, 00:18:39.244 "seek_data": true, 00:18:39.245 "copy": false, 00:18:39.245 "nvme_iov_md": false 00:18:39.245 }, 00:18:39.245 "driver_specific": { 00:18:39.245 "lvol": { 00:18:39.245 "lvol_store_uuid": "48eece26-898b-44f6-952c-0c08e72728c6", 00:18:39.245 "base_bdev": "nvme0n1", 00:18:39.245 "thin_provision": true, 00:18:39.245 "num_allocated_clusters": 0, 00:18:39.245 "snapshot": false, 00:18:39.245 "clone": false, 00:18:39.245 "esnap_clone": false 00:18:39.245 } 00:18:39.245 } 00:18:39.245 } 00:18:39.245 ]' 00:18:39.245 17:52:59 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:39.245 17:52:59 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:18:39.245 17:52:59 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:39.505 17:52:59 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:18:39.505 17:52:59 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:39.505 17:52:59 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:18:39.505 17:52:59 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:18:39.505 17:52:59 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:18:39.505 17:52:59 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:39.769 17:52:59 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:39.769 17:52:59 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:39.769 17:52:59 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 45626d5f-0a39-424e-a071-efd83a346f77 00:18:39.769 17:52:59 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=45626d5f-0a39-424e-a071-efd83a346f77 00:18:39.769 17:52:59 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:39.770 17:52:59 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:18:39.770 17:52:59 ftl.ftl_restore -- 
common/autotest_common.sh@1383 -- # local nb 00:18:39.770 17:52:59 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 45626d5f-0a39-424e-a071-efd83a346f77 00:18:39.770 17:52:59 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:39.770 { 00:18:39.770 "name": "45626d5f-0a39-424e-a071-efd83a346f77", 00:18:39.770 "aliases": [ 00:18:39.770 "lvs/nvme0n1p0" 00:18:39.770 ], 00:18:39.770 "product_name": "Logical Volume", 00:18:39.770 "block_size": 4096, 00:18:39.770 "num_blocks": 26476544, 00:18:39.770 "uuid": "45626d5f-0a39-424e-a071-efd83a346f77", 00:18:39.770 "assigned_rate_limits": { 00:18:39.770 "rw_ios_per_sec": 0, 00:18:39.770 "rw_mbytes_per_sec": 0, 00:18:39.770 "r_mbytes_per_sec": 0, 00:18:39.770 "w_mbytes_per_sec": 0 00:18:39.770 }, 00:18:39.770 "claimed": false, 00:18:39.770 "zoned": false, 00:18:39.770 "supported_io_types": { 00:18:39.770 "read": true, 00:18:39.770 "write": true, 00:18:39.770 "unmap": true, 00:18:39.770 "flush": false, 00:18:39.770 "reset": true, 00:18:39.770 "nvme_admin": false, 00:18:39.770 "nvme_io": false, 00:18:39.770 "nvme_io_md": false, 00:18:39.770 "write_zeroes": true, 00:18:39.770 "zcopy": false, 00:18:39.770 "get_zone_info": false, 00:18:39.770 "zone_management": false, 00:18:39.770 "zone_append": false, 00:18:39.770 "compare": false, 00:18:39.770 "compare_and_write": false, 00:18:39.770 "abort": false, 00:18:39.770 "seek_hole": true, 00:18:39.770 "seek_data": true, 00:18:39.770 "copy": false, 00:18:39.770 "nvme_iov_md": false 00:18:39.770 }, 00:18:39.770 "driver_specific": { 00:18:39.770 "lvol": { 00:18:39.770 "lvol_store_uuid": "48eece26-898b-44f6-952c-0c08e72728c6", 00:18:39.770 "base_bdev": "nvme0n1", 00:18:39.770 "thin_provision": true, 00:18:39.770 "num_allocated_clusters": 0, 00:18:39.770 "snapshot": false, 00:18:39.770 "clone": false, 00:18:39.770 "esnap_clone": false 00:18:39.770 } 00:18:39.770 } 00:18:39.770 } 00:18:39.770 ]' 00:18:39.770 17:52:59 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:39.771 17:52:59 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:18:40.064 17:52:59 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:40.064 17:52:59 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:18:40.064 17:52:59 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:40.064 17:52:59 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:18:40.064 17:52:59 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:18:40.064 17:52:59 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:40.064 17:52:59 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:40.064 17:53:00 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 45626d5f-0a39-424e-a071-efd83a346f77 00:18:40.064 17:53:00 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=45626d5f-0a39-424e-a071-efd83a346f77 00:18:40.064 17:53:00 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:40.064 17:53:00 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:18:40.064 17:53:00 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:18:40.064 17:53:00 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 45626d5f-0a39-424e-a071-efd83a346f77 00:18:40.325 17:53:00 
ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:40.325 { 00:18:40.325 "name": "45626d5f-0a39-424e-a071-efd83a346f77", 00:18:40.325 "aliases": [ 00:18:40.325 "lvs/nvme0n1p0" 00:18:40.325 ], 00:18:40.325 "product_name": "Logical Volume", 00:18:40.325 "block_size": 4096, 00:18:40.325 "num_blocks": 26476544, 00:18:40.325 "uuid": "45626d5f-0a39-424e-a071-efd83a346f77", 00:18:40.325 "assigned_rate_limits": { 00:18:40.325 "rw_ios_per_sec": 0, 00:18:40.325 "rw_mbytes_per_sec": 0, 00:18:40.325 "r_mbytes_per_sec": 0, 00:18:40.325 "w_mbytes_per_sec": 0 00:18:40.325 }, 00:18:40.325 "claimed": false, 00:18:40.325 "zoned": false, 00:18:40.325 "supported_io_types": { 00:18:40.325 "read": true, 00:18:40.325 "write": true, 00:18:40.325 "unmap": true, 00:18:40.325 "flush": false, 00:18:40.325 "reset": true, 00:18:40.325 "nvme_admin": false, 00:18:40.325 "nvme_io": false, 00:18:40.325 "nvme_io_md": false, 00:18:40.325 "write_zeroes": true, 00:18:40.325 "zcopy": false, 00:18:40.325 "get_zone_info": false, 00:18:40.325 "zone_management": false, 00:18:40.325 "zone_append": false, 00:18:40.325 "compare": false, 00:18:40.325 "compare_and_write": false, 00:18:40.325 "abort": false, 00:18:40.325 "seek_hole": true, 00:18:40.325 "seek_data": true, 00:18:40.325 "copy": false, 00:18:40.325 "nvme_iov_md": false 00:18:40.325 }, 00:18:40.325 "driver_specific": { 00:18:40.325 "lvol": { 00:18:40.325 "lvol_store_uuid": "48eece26-898b-44f6-952c-0c08e72728c6", 00:18:40.325 "base_bdev": "nvme0n1", 00:18:40.325 "thin_provision": true, 00:18:40.325 "num_allocated_clusters": 0, 00:18:40.325 "snapshot": false, 00:18:40.325 "clone": false, 00:18:40.325 "esnap_clone": false 00:18:40.325 } 00:18:40.325 } 00:18:40.325 } 00:18:40.325 ]' 00:18:40.325 17:53:00 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:40.325 17:53:00 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:18:40.325 17:53:00 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:40.325 17:53:00 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:18:40.325 17:53:00 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:40.325 17:53:00 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:18:40.325 17:53:00 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:40.325 17:53:00 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 45626d5f-0a39-424e-a071-efd83a346f77 --l2p_dram_limit 10' 00:18:40.325 17:53:00 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:40.325 17:53:00 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:40.325 17:53:00 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:40.325 17:53:00 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:40.326 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:40.326 17:53:00 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 45626d5f-0a39-424e-a071-efd83a346f77 --l2p_dram_limit 10 -c nvc0n1p0 00:18:40.588 [2024-11-05 17:53:00.460998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.588 [2024-11-05 17:53:00.461056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:40.588 [2024-11-05 17:53:00.461081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 
ms 00:18:40.588 [2024-11-05 17:53:00.461090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.588 [2024-11-05 17:53:00.461154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.588 [2024-11-05 17:53:00.461163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:40.588 [2024-11-05 17:53:00.461183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:18:40.588 [2024-11-05 17:53:00.461190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.588 [2024-11-05 17:53:00.461215] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:40.588 [2024-11-05 17:53:00.461823] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:40.588 [2024-11-05 17:53:00.461868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.588 [2024-11-05 17:53:00.461879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:40.588 [2024-11-05 17:53:00.461890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.661 ms 00:18:40.588 [2024-11-05 17:53:00.461902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.588 [2024-11-05 17:53:00.461992] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 76f78c8c-6fe9-4417-9c01-875c84cce43d 00:18:40.588 [2024-11-05 17:53:00.463137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.588 [2024-11-05 17:53:00.463173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:40.588 [2024-11-05 17:53:00.463187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:40.588 [2024-11-05 17:53:00.463197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.588 [2024-11-05 17:53:00.468599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.588 [2024-11-05 17:53:00.468632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:40.588 [2024-11-05 17:53:00.468643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.348 ms 00:18:40.588 [2024-11-05 17:53:00.468656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.588 [2024-11-05 17:53:00.468747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.588 [2024-11-05 17:53:00.468758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:40.588 [2024-11-05 17:53:00.468766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:40.588 [2024-11-05 17:53:00.468774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.588 [2024-11-05 17:53:00.468827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.588 [2024-11-05 17:53:00.468839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:40.588 [2024-11-05 17:53:00.468847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:40.588 [2024-11-05 17:53:00.468857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.588 [2024-11-05 17:53:00.468879] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:40.588 [2024-11-05 17:53:00.470414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.588 
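The startup trace above is the result of the bdev_ftl_create call assembled a few steps earlier, and the benign "[: : integer expression expected" message comes from restore.sh testing an empty variable with a numeric comparison at line 54. A minimal sketch of the size arithmetic, the construct call, and a safer guard, using only values shown in the log (the guard rewrite and its variable name are a hypothetical fix, not the actual restore.sh source):

  # get_bdev_size: jq extracts block_size and num_blocks, then MiB = nb * bs / 1048576
  echo $(( 26476544 * 4096 / 1048576 ))    # 103424, the bdev_size echoed above

  # The construct call driven through rpc.py with a 240 s timeout:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create \
      -b ftl0 -d 45626d5f-0a39-424e-a071-efd83a346f77 \
      --l2p_dram_limit 10 -c nvc0n1p0

  # A string test avoids the "integer expression expected" noise when the
  # variable is unset ($fast_shutdown is a hypothetical stand-in):
  if [ "${fast_shutdown:-0}" = "1" ]; then :; fi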
[2024-11-05 17:53:00.470441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:40.588 [2024-11-05 17:53:00.470453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.538 ms 00:18:40.588 [2024-11-05 17:53:00.470461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.588 [2024-11-05 17:53:00.470494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.588 [2024-11-05 17:53:00.470503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:40.588 [2024-11-05 17:53:00.470516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:40.588 [2024-11-05 17:53:00.470524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.588 [2024-11-05 17:53:00.470546] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:40.588 [2024-11-05 17:53:00.470690] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:40.588 [2024-11-05 17:53:00.470705] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:40.588 [2024-11-05 17:53:00.470717] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:40.588 [2024-11-05 17:53:00.470729] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:40.588 [2024-11-05 17:53:00.470744] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:40.588 [2024-11-05 17:53:00.470760] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:40.588 [2024-11-05 17:53:00.470770] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:40.588 [2024-11-05 17:53:00.470780] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:40.588 [2024-11-05 17:53:00.470788] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:40.588 [2024-11-05 17:53:00.470799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.588 [2024-11-05 17:53:00.470810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:40.588 [2024-11-05 17:53:00.470828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:18:40.588 [2024-11-05 17:53:00.470837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.588 [2024-11-05 17:53:00.470924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.588 [2024-11-05 17:53:00.470933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:40.588 [2024-11-05 17:53:00.470946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:18:40.588 [2024-11-05 17:53:00.470956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.588 [2024-11-05 17:53:00.471053] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:40.588 [2024-11-05 17:53:00.471083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:40.588 [2024-11-05 17:53:00.471096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:40.588 [2024-11-05 17:53:00.471105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.588 [2024-11-05 17:53:00.471115] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region l2p 00:18:40.588 [2024-11-05 17:53:00.471122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:40.588 [2024-11-05 17:53:00.471131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:40.588 [2024-11-05 17:53:00.471139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:40.588 [2024-11-05 17:53:00.471149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:40.588 [2024-11-05 17:53:00.471156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:40.588 [2024-11-05 17:53:00.471166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:40.588 [2024-11-05 17:53:00.471173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:40.588 [2024-11-05 17:53:00.471185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:40.588 [2024-11-05 17:53:00.471193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:40.588 [2024-11-05 17:53:00.471202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:40.588 [2024-11-05 17:53:00.471209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.588 [2024-11-05 17:53:00.471218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:40.588 [2024-11-05 17:53:00.471226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:40.589 [2024-11-05 17:53:00.471236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.589 [2024-11-05 17:53:00.471244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:40.589 [2024-11-05 17:53:00.471253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:40.589 [2024-11-05 17:53:00.471261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.589 [2024-11-05 17:53:00.471270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:40.589 [2024-11-05 17:53:00.471278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:40.589 [2024-11-05 17:53:00.471287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.589 [2024-11-05 17:53:00.471294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:40.589 [2024-11-05 17:53:00.471303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:40.589 [2024-11-05 17:53:00.471311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.589 [2024-11-05 17:53:00.471324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:40.589 [2024-11-05 17:53:00.471331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:40.589 [2024-11-05 17:53:00.471340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.589 [2024-11-05 17:53:00.471348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:40.589 [2024-11-05 17:53:00.471357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:40.589 [2024-11-05 17:53:00.471365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:40.589 [2024-11-05 17:53:00.471374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:40.589 [2024-11-05 17:53:00.471381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:40.589 [2024-11-05 17:53:00.471391] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:40.589 [2024-11-05 17:53:00.471398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:40.589 [2024-11-05 17:53:00.471407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:40.589 [2024-11-05 17:53:00.471415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.589 [2024-11-05 17:53:00.471424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:40.589 [2024-11-05 17:53:00.471431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:40.589 [2024-11-05 17:53:00.471440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.589 [2024-11-05 17:53:00.471447] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:40.589 [2024-11-05 17:53:00.471459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:40.589 [2024-11-05 17:53:00.471467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:40.589 [2024-11-05 17:53:00.471476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.589 [2024-11-05 17:53:00.471485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:40.589 [2024-11-05 17:53:00.471494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:40.589 [2024-11-05 17:53:00.471501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:40.589 [2024-11-05 17:53:00.471511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:40.589 [2024-11-05 17:53:00.471519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:40.589 [2024-11-05 17:53:00.471528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:40.589 [2024-11-05 17:53:00.471539] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:40.589 [2024-11-05 17:53:00.471554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:40.589 [2024-11-05 17:53:00.471563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:40.589 [2024-11-05 17:53:00.471573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:40.589 [2024-11-05 17:53:00.471581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:40.589 [2024-11-05 17:53:00.471591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:40.589 [2024-11-05 17:53:00.471599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:40.589 [2024-11-05 17:53:00.471610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:40.589 [2024-11-05 17:53:00.471618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:40.589 [2024-11-05 17:53:00.471627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe 
ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:40.589 [2024-11-05 17:53:00.471636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:40.589 [2024-11-05 17:53:00.471646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:40.589 [2024-11-05 17:53:00.471654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:40.589 [2024-11-05 17:53:00.471664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:40.589 [2024-11-05 17:53:00.471672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:40.589 [2024-11-05 17:53:00.471682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:40.589 [2024-11-05 17:53:00.471690] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:40.589 [2024-11-05 17:53:00.471701] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:40.589 [2024-11-05 17:53:00.471710] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:40.589 [2024-11-05 17:53:00.471720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:40.589 [2024-11-05 17:53:00.471728] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:40.589 [2024-11-05 17:53:00.471738] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:40.589 [2024-11-05 17:53:00.471746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.589 [2024-11-05 17:53:00.471758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:40.589 [2024-11-05 17:53:00.471766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.761 ms 00:18:40.589 [2024-11-05 17:53:00.471780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.589 [2024-11-05 17:53:00.471819] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
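Two numbers in the trace are worth cross-checking. The layout dump reports 20971520 L2P entries at 4 bytes each, an 80 MiB table matching the 80.00 MiB l2p region, of which only 10 MiB may stay resident because of --l2p_dram_limit 10. The NV cache scrub announced above wipes 5 chunks (the chunk count from the layout setup); the 3249.934 ms duration reported just below works out to roughly 650 ms per chunk. A one-liner reproducing both figures from the log's own values:

  awk 'BEGIN {
      printf "L2P total: %.0f MiB\n", 20971520 * 4 / 1048576   # 80 MiB l2p region
      printf "scrub:     %.1f ms/chunk\n", 3249.934 / 5        # ~650 ms per chunk
  }'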
00:18:40.589 [2024-11-05 17:53:00.471830] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:43.887 [2024-11-05 17:53:03.721766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.887 [2024-11-05 17:53:03.721836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:43.887 [2024-11-05 17:53:03.721850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3249.934 ms 00:18:43.887 [2024-11-05 17:53:03.721861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.887 [2024-11-05 17:53:03.730636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.887 [2024-11-05 17:53:03.730691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:43.887 [2024-11-05 17:53:03.730706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.694 ms 00:18:43.887 [2024-11-05 17:53:03.730723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.887 [2024-11-05 17:53:03.730831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.887 [2024-11-05 17:53:03.730842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:43.887 [2024-11-05 17:53:03.730851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:18:43.887 [2024-11-05 17:53:03.730863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.887 [2024-11-05 17:53:03.739832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.887 [2024-11-05 17:53:03.739886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:43.887 [2024-11-05 17:53:03.739899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.910 ms 00:18:43.887 [2024-11-05 17:53:03.739911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.887 [2024-11-05 17:53:03.739949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.887 [2024-11-05 17:53:03.739959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:43.888 [2024-11-05 17:53:03.739968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:43.888 [2024-11-05 17:53:03.739977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.888 [2024-11-05 17:53:03.740338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.888 [2024-11-05 17:53:03.740363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:43.888 [2024-11-05 17:53:03.740376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:18:43.888 [2024-11-05 17:53:03.740392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.888 [2024-11-05 17:53:03.740553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.888 [2024-11-05 17:53:03.740581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:43.888 [2024-11-05 17:53:03.740595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:18:43.888 [2024-11-05 17:53:03.740615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.888 [2024-11-05 17:53:03.746226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.888 [2024-11-05 17:53:03.746267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:43.888 [2024-11-05 
17:53:03.746278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.582 ms 00:18:43.888 [2024-11-05 17:53:03.746287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.888 [2024-11-05 17:53:03.754679] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:43.888 [2024-11-05 17:53:03.757557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.888 [2024-11-05 17:53:03.757596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:43.888 [2024-11-05 17:53:03.757616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.177 ms 00:18:43.888 [2024-11-05 17:53:03.757625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.888 [2024-11-05 17:53:03.840982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.888 [2024-11-05 17:53:03.841053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:43.888 [2024-11-05 17:53:03.841086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.309 ms 00:18:43.888 [2024-11-05 17:53:03.841095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.888 [2024-11-05 17:53:03.841282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.888 [2024-11-05 17:53:03.841293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:43.888 [2024-11-05 17:53:03.841304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:18:43.888 [2024-11-05 17:53:03.841312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.888 [2024-11-05 17:53:03.845818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.888 [2024-11-05 17:53:03.845868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:43.888 [2024-11-05 17:53:03.845886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.482 ms 00:18:43.888 [2024-11-05 17:53:03.845895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.888 [2024-11-05 17:53:03.851115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.888 [2024-11-05 17:53:03.851197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:43.888 [2024-11-05 17:53:03.851228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.167 ms 00:18:43.888 [2024-11-05 17:53:03.851246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.888 [2024-11-05 17:53:03.851854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.888 [2024-11-05 17:53:03.851894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:43.888 [2024-11-05 17:53:03.851921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:18:43.888 [2024-11-05 17:53:03.851939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.148 [2024-11-05 17:53:03.886706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.148 [2024-11-05 17:53:03.886768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:44.148 [2024-11-05 17:53:03.886786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.695 ms 00:18:44.148 [2024-11-05 17:53:03.886795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.148 [2024-11-05 17:53:03.891907] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.148 [2024-11-05 17:53:03.891951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:44.148 [2024-11-05 17:53:03.891964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.051 ms 00:18:44.148 [2024-11-05 17:53:03.891972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.148 [2024-11-05 17:53:03.896566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.148 [2024-11-05 17:53:03.896604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:44.148 [2024-11-05 17:53:03.896616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.555 ms 00:18:44.148 [2024-11-05 17:53:03.896623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.148 [2024-11-05 17:53:03.902090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.148 [2024-11-05 17:53:03.902129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:44.148 [2024-11-05 17:53:03.902144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.428 ms 00:18:44.148 [2024-11-05 17:53:03.902153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.148 [2024-11-05 17:53:03.902193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.148 [2024-11-05 17:53:03.902203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:44.148 [2024-11-05 17:53:03.902215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:44.148 [2024-11-05 17:53:03.902223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.148 [2024-11-05 17:53:03.902289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.148 [2024-11-05 17:53:03.902299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:44.148 [2024-11-05 17:53:03.902315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:44.148 [2024-11-05 17:53:03.902326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.148 [2024-11-05 17:53:03.903552] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3442.172 ms, result 0 00:18:44.148 { 00:18:44.148 "name": "ftl0", 00:18:44.148 "uuid": "76f78c8c-6fe9-4417-9c01-875c84cce43d" 00:18:44.148 } 00:18:44.148 17:53:03 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:44.148 17:53:03 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:44.148 17:53:04 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:18:44.148 17:53:04 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:44.409 [2024-11-05 17:53:04.326097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.409 [2024-11-05 17:53:04.326158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:44.409 [2024-11-05 17:53:04.326175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:44.409 [2024-11-05 17:53:04.326185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.409 [2024-11-05 17:53:04.326209] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:44.409 
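With startup finished, restore.sh snapshots the bdev subsystem configuration and then unloads ftl0, which produces the shutdown trace running through here. The snapshot is the echo/save_subsystem_config/echo sandwich shown above; a condensed sketch of the two steps (the redirect target is an assumption, inferred from the ftl.json path passed to spdk_dd later in the log):

  {
      echo '{"subsystems": ['
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
      echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0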
[2024-11-05 17:53:04.326679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.409 [2024-11-05 17:53:04.326705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:44.409 [2024-11-05 17:53:04.326717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:18:44.409 [2024-11-05 17:53:04.326724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.409 [2024-11-05 17:53:04.327000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.409 [2024-11-05 17:53:04.327018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:44.409 [2024-11-05 17:53:04.327033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:18:44.409 [2024-11-05 17:53:04.327044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.409 [2024-11-05 17:53:04.330291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.409 [2024-11-05 17:53:04.330313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:44.409 [2024-11-05 17:53:04.330325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.229 ms 00:18:44.409 [2024-11-05 17:53:04.330333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.409 [2024-11-05 17:53:04.336503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.409 [2024-11-05 17:53:04.336535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:44.409 [2024-11-05 17:53:04.336547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.148 ms 00:18:44.409 [2024-11-05 17:53:04.336558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.409 [2024-11-05 17:53:04.339053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.409 [2024-11-05 17:53:04.339099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:44.409 [2024-11-05 17:53:04.339111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.403 ms 00:18:44.409 [2024-11-05 17:53:04.339118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.409 [2024-11-05 17:53:04.345177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.409 [2024-11-05 17:53:04.345222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:44.409 [2024-11-05 17:53:04.345237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.018 ms 00:18:44.409 [2024-11-05 17:53:04.345245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.409 [2024-11-05 17:53:04.345368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.409 [2024-11-05 17:53:04.345377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:44.409 [2024-11-05 17:53:04.345390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:18:44.409 [2024-11-05 17:53:04.345400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.409 [2024-11-05 17:53:04.348232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.409 [2024-11-05 17:53:04.348288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:44.409 [2024-11-05 17:53:04.348303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.802 ms 00:18:44.409 [2024-11-05 17:53:04.348312] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.409 [2024-11-05 17:53:04.351103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.409 [2024-11-05 17:53:04.351141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:44.409 [2024-11-05 17:53:04.351153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.745 ms 00:18:44.409 [2024-11-05 17:53:04.351160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.409 [2024-11-05 17:53:04.353249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.409 [2024-11-05 17:53:04.353282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:44.409 [2024-11-05 17:53:04.353294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.049 ms 00:18:44.409 [2024-11-05 17:53:04.353301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.409 [2024-11-05 17:53:04.355331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.409 [2024-11-05 17:53:04.355363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:44.409 [2024-11-05 17:53:04.355374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.967 ms 00:18:44.409 [2024-11-05 17:53:04.355381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.409 [2024-11-05 17:53:04.355417] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:44.409 [2024-11-05 17:53:04.355432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:44.409 [2024-11-05 17:53:04.355449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:44.409 [2024-11-05 17:53:04.355457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:44.409 [2024-11-05 17:53:04.355469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:44.409 [2024-11-05 17:53:04.355477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:44.409 [2024-11-05 17:53:04.355487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:44.409 [2024-11-05 17:53:04.355494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:44.409 [2024-11-05 17:53:04.355503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:44.409 [2024-11-05 17:53:04.355511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:44.409 [2024-11-05 17:53:04.355520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:44.409 [2024-11-05 17:53:04.355527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:44.409 [2024-11-05 17:53:04.355537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:44.409 [2024-11-05 17:53:04.355545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:44.409 [2024-11-05 17:53:04.355554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:44.409 [2024-11-05 17:53:04.355561] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:44.409 [2024-11-05 17:53:04.355570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:44.409 [2024-11-05 17:53:04.355578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:44.409 [2024-11-05 17:53:04.355587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:44.409 [2024-11-05 17:53:04.355594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:44.409 [2024-11-05 17:53:04.355605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:44.409 [2024-11-05 17:53:04.355612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:44.409 [2024-11-05 17:53:04.355622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:44.409 [2024-11-05 17:53:04.355629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:44.409 [2024-11-05 17:53:04.355640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:44.409 [2024-11-05 17:53:04.355647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:44.409 [2024-11-05 17:53:04.355656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:44.409 [2024-11-05 17:53:04.355664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:44.409 [2024-11-05 17:53:04.355673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:44.409 [2024-11-05 17:53:04.355681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:44.409 [2024-11-05 17:53:04.355689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:44.409 [2024-11-05 17:53:04.355698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.355708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.355715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.355724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.355731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.355741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.355749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.355758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.355765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 
[2024-11-05 17:53:04.355774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.355781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.355790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.355798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.355807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.355814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.355822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.355830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.355839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.355846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.355855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.355862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.355874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.355881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.355890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.355897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.355906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.355914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.355922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.355930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.355939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.355947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.355955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.355963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.355979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 
state: free 00:18:44.410 [2024-11-05 17:53:04.355987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.355996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.356003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.356013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.356021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.356030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.356037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.356046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.356053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.356075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.356083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.356094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.356102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.356111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.356118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.356128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.356135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.356144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.356151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.356162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.356169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.356178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.356186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.356195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.356202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 
0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.356211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.356218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.356228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.356236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.356244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.356252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.356261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.356269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.356277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.356285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.356296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:44.410 [2024-11-05 17:53:04.356311] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:44.410 [2024-11-05 17:53:04.356322] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 76f78c8c-6fe9-4417-9c01-875c84cce43d 00:18:44.410 [2024-11-05 17:53:04.356330] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:44.410 [2024-11-05 17:53:04.356339] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:44.410 [2024-11-05 17:53:04.356346] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:44.410 [2024-11-05 17:53:04.356355] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:44.410 [2024-11-05 17:53:04.356362] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:44.410 [2024-11-05 17:53:04.356374] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:44.410 [2024-11-05 17:53:04.356381] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:44.410 [2024-11-05 17:53:04.356389] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:44.410 [2024-11-05 17:53:04.356396] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:44.410 [2024-11-05 17:53:04.356404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.410 [2024-11-05 17:53:04.356412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:44.410 [2024-11-05 17:53:04.356422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.993 ms 00:18:44.410 [2024-11-05 17:53:04.356429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.410 [2024-11-05 17:53:04.357963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.410 [2024-11-05 17:53:04.357990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:44.410 
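The statistics dump above reports 960 total writes against 0 user writes, so WAF prints as "inf": all traffic so far is FTL metadata, with no user data to amortize it against. A guarded version of that ratio, assuming WAF is read as total writes over user writes:

  total=960
  user=0
  if [ "$user" -gt 0 ]; then
      awk -v t="$total" -v u="$user" 'BEGIN { printf "WAF: %.2f\n", t / u }'
  else
      echo "WAF: inf"   # matches the dump above
  fi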
[2024-11-05 17:53:04.358001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.512 ms 00:18:44.410 [2024-11-05 17:53:04.358010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.410 [2024-11-05 17:53:04.358105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.410 [2024-11-05 17:53:04.358115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:44.410 [2024-11-05 17:53:04.358127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:18:44.410 [2024-11-05 17:53:04.358135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.410 [2024-11-05 17:53:04.363558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.410 [2024-11-05 17:53:04.363603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:44.410 [2024-11-05 17:53:04.363618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.410 [2024-11-05 17:53:04.363625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.411 [2024-11-05 17:53:04.363695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.411 [2024-11-05 17:53:04.363702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:44.411 [2024-11-05 17:53:04.363712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.411 [2024-11-05 17:53:04.363719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.411 [2024-11-05 17:53:04.363788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.411 [2024-11-05 17:53:04.363799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:44.411 [2024-11-05 17:53:04.363808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.411 [2024-11-05 17:53:04.363818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.411 [2024-11-05 17:53:04.363836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.411 [2024-11-05 17:53:04.363844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:44.411 [2024-11-05 17:53:04.363853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.411 [2024-11-05 17:53:04.363860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.411 [2024-11-05 17:53:04.373678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.411 [2024-11-05 17:53:04.373728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:44.411 [2024-11-05 17:53:04.373746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.411 [2024-11-05 17:53:04.373756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.411 [2024-11-05 17:53:04.381645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.411 [2024-11-05 17:53:04.381698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:44.411 [2024-11-05 17:53:04.381710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.411 [2024-11-05 17:53:04.381718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.411 [2024-11-05 17:53:04.381791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.411 [2024-11-05 17:53:04.381800] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:44.411 [2024-11-05 17:53:04.381809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.411 [2024-11-05 17:53:04.381816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.411 [2024-11-05 17:53:04.381854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.411 [2024-11-05 17:53:04.381862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:44.411 [2024-11-05 17:53:04.381872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.411 [2024-11-05 17:53:04.381879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.411 [2024-11-05 17:53:04.381945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.411 [2024-11-05 17:53:04.381954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:44.411 [2024-11-05 17:53:04.381963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.411 [2024-11-05 17:53:04.381970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.411 [2024-11-05 17:53:04.382002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.411 [2024-11-05 17:53:04.382012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:44.411 [2024-11-05 17:53:04.382026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.411 [2024-11-05 17:53:04.382036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.411 [2024-11-05 17:53:04.382147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.411 [2024-11-05 17:53:04.382157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:44.411 [2024-11-05 17:53:04.382166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.411 [2024-11-05 17:53:04.382174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.411 [2024-11-05 17:53:04.382220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.411 [2024-11-05 17:53:04.382236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:44.411 [2024-11-05 17:53:04.382246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.411 [2024-11-05 17:53:04.382253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.411 [2024-11-05 17:53:04.382379] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 56.260 ms, result 0 00:18:44.411 true 00:18:44.411 17:53:04 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 87189 00:18:44.411 17:53:04 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 87189 ']' 00:18:44.411 17:53:04 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 87189 00:18:44.671 17:53:04 ftl.ftl_restore -- common/autotest_common.sh@957 -- # uname 00:18:44.671 17:53:04 ftl.ftl_restore -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:44.671 17:53:04 ftl.ftl_restore -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87189 00:18:44.671 killing process with pid 87189 00:18:44.671 17:53:04 ftl.ftl_restore -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:44.671 17:53:04 ftl.ftl_restore -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:44.671 
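The killprocess trace interleaved here shows the helper's safety checks before it kills the SPDK app: it refuses an empty PID, probes the process with kill -0, resolves the command name with ps so it never kills a sudo wrapper, then kills and waits. A condensed paraphrase of the checks visible in the xtrace (not the exact autotest_common.sh source; the real helper also branches on uname for the ps syntax):

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                # no PID supplied
      kill -0 "$pid" || return 1               # process must still exist
      local name
      name=$(ps --no-headers -o comm= "$pid")  # Linux form, per the uname check
      [ "$name" = sudo ] && return 1           # never kill the sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"
  }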
17:53:04 ftl.ftl_restore -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87189' 00:18:44.671 17:53:04 ftl.ftl_restore -- common/autotest_common.sh@971 -- # kill 87189 00:18:44.671 17:53:04 ftl.ftl_restore -- common/autotest_common.sh@976 -- # wait 87189 00:18:52.800 17:53:12 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:57.033 262144+0 records in 00:18:57.033 262144+0 records out 00:18:57.033 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.82482 s, 281 MB/s 00:18:57.033 17:53:16 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:58.940 17:53:18 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:58.940 [2024-11-05 17:53:18.678443] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:18:58.940 [2024-11-05 17:53:18.678572] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87405 ] 00:18:58.940 [2024-11-05 17:53:18.808403] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:58.940 [2024-11-05 17:53:18.838907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.940 [2024-11-05 17:53:18.867385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.202 [2024-11-05 17:53:18.971569] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:59.202 [2024-11-05 17:53:18.971662] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:59.202 [2024-11-05 17:53:19.129544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.202 [2024-11-05 17:53:19.129610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:59.202 [2024-11-05 17:53:19.129630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:59.202 [2024-11-05 17:53:19.129638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.202 [2024-11-05 17:53:19.129703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.202 [2024-11-05 17:53:19.129714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:59.202 [2024-11-05 17:53:19.129722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:18:59.202 [2024-11-05 17:53:19.129729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.202 [2024-11-05 17:53:19.129752] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:59.202 [2024-11-05 17:53:19.130123] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:59.202 [2024-11-05 17:53:19.130160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.202 [2024-11-05 17:53:19.130168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:59.202 [2024-11-05 17:53:19.130179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:18:59.202 
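The 'FTL shutdown' process above completes in 56.260 ms, killprocess reaps reactor_0 (pid 87189), and restore.sh then sets up the write phase: 1 GiB of /dev/urandom in 4 KiB blocks, an md5sum of the payload, and an spdk_dd write into the ftl0 bdev. The dd rate is self-consistent: 1073741824 bytes over 3.82482 s is about 280.7 MB/s, matching the reported 281 MB/s. A minimal bash sketch of that flow, using the paths visible in the log (testfile.md5 is an assumption added here for the later verify step):

```bash
#!/usr/bin/env bash
# Minimal sketch of the write phase seen above; paths and the ftl0 bdev name
# are taken from the log, testfile.md5 is an assumption for this sketch.
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk
TESTFILE=$SPDK/test/ftl/testfile
FTL_JSON=$SPDK/test/ftl/config/ftl.json

# restore.sh@69: 256K blocks of 4 KiB = 1 GiB of random payload
dd if=/dev/urandom of="$TESTFILE" bs=4K count=256K

# restore.sh@70: checksum now, so the post-restore read-back can be verified
md5sum "$TESTFILE" | tee "$TESTFILE.md5"

# restore.sh@73: write the payload into the FTL bdev through spdk_dd
"$SPDK/build/bin/spdk_dd" --if="$TESTFILE" --ob=ftl0 --json="$FTL_JSON"
```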
[2024-11-05 17:53:19.130188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.202 [2024-11-05 17:53:19.131432] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:59.202 [2024-11-05 17:53:19.133707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.202 [2024-11-05 17:53:19.133742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:59.202 [2024-11-05 17:53:19.133752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.277 ms 00:18:59.202 [2024-11-05 17:53:19.133768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.202 [2024-11-05 17:53:19.133835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.202 [2024-11-05 17:53:19.133845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:59.202 [2024-11-05 17:53:19.133854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:59.202 [2024-11-05 17:53:19.133862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.202 [2024-11-05 17:53:19.139499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.202 [2024-11-05 17:53:19.139548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:59.202 [2024-11-05 17:53:19.139570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.566 ms 00:18:59.202 [2024-11-05 17:53:19.139580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.202 [2024-11-05 17:53:19.139694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.202 [2024-11-05 17:53:19.139705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:59.202 [2024-11-05 17:53:19.139717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:18:59.202 [2024-11-05 17:53:19.139726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.202 [2024-11-05 17:53:19.139783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.202 [2024-11-05 17:53:19.139794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:59.202 [2024-11-05 17:53:19.139803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:59.202 [2024-11-05 17:53:19.139818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.202 [2024-11-05 17:53:19.139845] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:59.202 [2024-11-05 17:53:19.141296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.202 [2024-11-05 17:53:19.141332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:59.202 [2024-11-05 17:53:19.141343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.458 ms 00:18:59.202 [2024-11-05 17:53:19.141353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.202 [2024-11-05 17:53:19.141390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.202 [2024-11-05 17:53:19.141400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:59.202 [2024-11-05 17:53:19.141410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:59.202 [2024-11-05 17:53:19.141431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.202 [2024-11-05 
17:53:19.141459] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:59.202 [2024-11-05 17:53:19.141479] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:59.202 [2024-11-05 17:53:19.141517] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:59.202 [2024-11-05 17:53:19.141539] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:59.202 [2024-11-05 17:53:19.141650] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:59.202 [2024-11-05 17:53:19.141668] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:59.202 [2024-11-05 17:53:19.141682] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:59.202 [2024-11-05 17:53:19.141693] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:59.202 [2024-11-05 17:53:19.141703] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:59.202 [2024-11-05 17:53:19.141712] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:59.202 [2024-11-05 17:53:19.141720] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:59.202 [2024-11-05 17:53:19.141729] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:59.203 [2024-11-05 17:53:19.141737] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:59.203 [2024-11-05 17:53:19.141745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.203 [2024-11-05 17:53:19.141753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:59.203 [2024-11-05 17:53:19.141761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:18:59.203 [2024-11-05 17:53:19.141776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.203 [2024-11-05 17:53:19.141861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.203 [2024-11-05 17:53:19.141880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:59.203 [2024-11-05 17:53:19.141888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:59.203 [2024-11-05 17:53:19.141895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.203 [2024-11-05 17:53:19.142001] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:59.203 [2024-11-05 17:53:19.142017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:59.203 [2024-11-05 17:53:19.142027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:59.203 [2024-11-05 17:53:19.142035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.203 [2024-11-05 17:53:19.142044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:59.203 [2024-11-05 17:53:19.142052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:59.203 [2024-11-05 17:53:19.142060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:59.203 [2024-11-05 17:53:19.142080] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region band_md 00:18:59.203 [2024-11-05 17:53:19.142095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:59.203 [2024-11-05 17:53:19.142103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:59.203 [2024-11-05 17:53:19.142112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:59.203 [2024-11-05 17:53:19.142119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:59.203 [2024-11-05 17:53:19.142126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:59.203 [2024-11-05 17:53:19.142134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:59.203 [2024-11-05 17:53:19.142141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:59.203 [2024-11-05 17:53:19.142148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.203 [2024-11-05 17:53:19.142155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:59.203 [2024-11-05 17:53:19.142163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:59.203 [2024-11-05 17:53:19.142170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.203 [2024-11-05 17:53:19.142177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:59.203 [2024-11-05 17:53:19.142184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:59.203 [2024-11-05 17:53:19.142193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:59.203 [2024-11-05 17:53:19.142201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:59.203 [2024-11-05 17:53:19.142207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:59.203 [2024-11-05 17:53:19.142214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:59.203 [2024-11-05 17:53:19.142221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:59.203 [2024-11-05 17:53:19.142236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:59.203 [2024-11-05 17:53:19.142244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:59.203 [2024-11-05 17:53:19.142251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:59.203 [2024-11-05 17:53:19.142260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:59.203 [2024-11-05 17:53:19.142268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:59.203 [2024-11-05 17:53:19.142274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:59.203 [2024-11-05 17:53:19.142280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:59.203 [2024-11-05 17:53:19.142287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:59.203 [2024-11-05 17:53:19.142293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:59.203 [2024-11-05 17:53:19.142300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:59.203 [2024-11-05 17:53:19.142306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:59.203 [2024-11-05 17:53:19.142313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:59.203 [2024-11-05 17:53:19.142320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:59.203 [2024-11-05 17:53:19.142326] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.203 [2024-11-05 17:53:19.142333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:59.203 [2024-11-05 17:53:19.142340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:59.203 [2024-11-05 17:53:19.142349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.203 [2024-11-05 17:53:19.142356] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:59.203 [2024-11-05 17:53:19.142366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:59.203 [2024-11-05 17:53:19.142376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:59.203 [2024-11-05 17:53:19.142384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.203 [2024-11-05 17:53:19.142392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:59.203 [2024-11-05 17:53:19.142399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:59.203 [2024-11-05 17:53:19.142406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:59.203 [2024-11-05 17:53:19.142413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:59.203 [2024-11-05 17:53:19.142419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:59.203 [2024-11-05 17:53:19.142426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:59.203 [2024-11-05 17:53:19.142434] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:59.203 [2024-11-05 17:53:19.142443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:59.203 [2024-11-05 17:53:19.142451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:59.203 [2024-11-05 17:53:19.142459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:59.203 [2024-11-05 17:53:19.142466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:59.203 [2024-11-05 17:53:19.142476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:59.203 [2024-11-05 17:53:19.142485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:59.203 [2024-11-05 17:53:19.142492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:59.203 [2024-11-05 17:53:19.142500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:59.203 [2024-11-05 17:53:19.142507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:59.203 [2024-11-05 17:53:19.142514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:59.203 [2024-11-05 17:53:19.142522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:59.203 [2024-11-05 17:53:19.142529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:59.203 [2024-11-05 17:53:19.142536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:59.203 [2024-11-05 17:53:19.142544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:59.203 [2024-11-05 17:53:19.142552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:59.203 [2024-11-05 17:53:19.142559] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:59.203 [2024-11-05 17:53:19.142570] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:59.203 [2024-11-05 17:53:19.142578] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:59.203 [2024-11-05 17:53:19.142587] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:59.203 [2024-11-05 17:53:19.142595] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:59.203 [2024-11-05 17:53:19.142604] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:59.203 [2024-11-05 17:53:19.142613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.203 [2024-11-05 17:53:19.142620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:59.203 [2024-11-05 17:53:19.142628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:18:59.203 [2024-11-05 17:53:19.142638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.203 [2024-11-05 17:53:19.152440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.203 [2024-11-05 17:53:19.152488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:59.203 [2024-11-05 17:53:19.152500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.753 ms 00:18:59.203 [2024-11-05 17:53:19.152509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.203 [2024-11-05 17:53:19.152602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.203 [2024-11-05 17:53:19.152611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:59.203 [2024-11-05 17:53:19.152620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:18:59.203 [2024-11-05 17:53:19.152627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.203 [2024-11-05 17:53:19.168001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.203 [2024-11-05 17:53:19.168057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:59.203 [2024-11-05 17:53:19.168081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.305 ms 00:18:59.203 [2024-11-05 17:53:19.168091] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.203 [2024-11-05 17:53:19.168151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.204 [2024-11-05 17:53:19.168161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:59.204 [2024-11-05 17:53:19.168170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:59.204 [2024-11-05 17:53:19.168190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.204 [2024-11-05 17:53:19.168591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.204 [2024-11-05 17:53:19.168619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:59.204 [2024-11-05 17:53:19.168630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:18:59.204 [2024-11-05 17:53:19.168639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.204 [2024-11-05 17:53:19.168780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.204 [2024-11-05 17:53:19.168793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:59.204 [2024-11-05 17:53:19.168801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:18:59.204 [2024-11-05 17:53:19.168814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.204 [2024-11-05 17:53:19.174579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.204 [2024-11-05 17:53:19.174627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:59.204 [2024-11-05 17:53:19.174640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.745 ms 00:18:59.204 [2024-11-05 17:53:19.174650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.204 [2024-11-05 17:53:19.177190] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:59.204 [2024-11-05 17:53:19.177243] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:59.204 [2024-11-05 17:53:19.177257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.204 [2024-11-05 17:53:19.177268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:59.204 [2024-11-05 17:53:19.177280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.496 ms 00:18:59.204 [2024-11-05 17:53:19.177289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.204 [2024-11-05 17:53:19.192883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.204 [2024-11-05 17:53:19.192946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:59.204 [2024-11-05 17:53:19.192959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.528 ms 00:18:59.204 [2024-11-05 17:53:19.192968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.463 [2024-11-05 17:53:19.195477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.463 [2024-11-05 17:53:19.195524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:59.463 [2024-11-05 17:53:19.195534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.432 ms 00:18:59.463 [2024-11-05 17:53:19.195542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.463 [2024-11-05 
17:53:19.197002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.463 [2024-11-05 17:53:19.197039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:59.463 [2024-11-05 17:53:19.197049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.417 ms 00:18:59.463 [2024-11-05 17:53:19.197057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.463 [2024-11-05 17:53:19.197430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.463 [2024-11-05 17:53:19.197455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:59.463 [2024-11-05 17:53:19.197465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:18:59.463 [2024-11-05 17:53:19.197472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.463 [2024-11-05 17:53:19.215389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.463 [2024-11-05 17:53:19.215464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:59.463 [2024-11-05 17:53:19.215476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.898 ms 00:18:59.463 [2024-11-05 17:53:19.215485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.463 [2024-11-05 17:53:19.223456] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:59.464 [2024-11-05 17:53:19.226623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.464 [2024-11-05 17:53:19.226673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:59.464 [2024-11-05 17:53:19.226692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.073 ms 00:18:59.464 [2024-11-05 17:53:19.226704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.464 [2024-11-05 17:53:19.226784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.464 [2024-11-05 17:53:19.226797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:59.464 [2024-11-05 17:53:19.226806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:59.464 [2024-11-05 17:53:19.226813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.464 [2024-11-05 17:53:19.226919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.464 [2024-11-05 17:53:19.226939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:59.464 [2024-11-05 17:53:19.226947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:59.464 [2024-11-05 17:53:19.226963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.464 [2024-11-05 17:53:19.226986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.464 [2024-11-05 17:53:19.226995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:59.464 [2024-11-05 17:53:19.227010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:59.464 [2024-11-05 17:53:19.227022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.464 [2024-11-05 17:53:19.227052] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:59.464 [2024-11-05 17:53:19.227088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.464 [2024-11-05 17:53:19.227097] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:59.464 [2024-11-05 17:53:19.227105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:59.464 [2024-11-05 17:53:19.227114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.464 [2024-11-05 17:53:19.230900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.464 [2024-11-05 17:53:19.230948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:59.464 [2024-11-05 17:53:19.230961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.759 ms 00:18:59.464 [2024-11-05 17:53:19.230971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.464 [2024-11-05 17:53:19.231052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.464 [2024-11-05 17:53:19.231081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:59.464 [2024-11-05 17:53:19.231092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:59.464 [2024-11-05 17:53:19.231104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.464 [2024-11-05 17:53:19.232194] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 102.245 ms, result 0 00:19:00.398  [2024-11-05T17:53:21.322Z] Copying: 35/1024 [MB] (35 MBps) [2024-11-05T17:53:22.260Z] Copying: 63/1024 [MB] (27 MBps) [2024-11-05T17:53:23.638Z] Copying: 89/1024 [MB] (25 MBps) [2024-11-05T17:53:24.575Z] Copying: 122/1024 [MB] (33 MBps) [2024-11-05T17:53:25.546Z] Copying: 138/1024 [MB] (15 MBps) [2024-11-05T17:53:26.484Z] Copying: 156/1024 [MB] (17 MBps) [2024-11-05T17:53:27.423Z] Copying: 174/1024 [MB] (18 MBps) [2024-11-05T17:53:28.363Z] Copying: 197/1024 [MB] (22 MBps) [2024-11-05T17:53:29.299Z] Copying: 218/1024 [MB] (21 MBps) [2024-11-05T17:53:30.683Z] Copying: 235/1024 [MB] (16 MBps) [2024-11-05T17:53:31.258Z] Copying: 251/1024 [MB] (16 MBps) [2024-11-05T17:53:32.672Z] Copying: 276/1024 [MB] (24 MBps) [2024-11-05T17:53:33.605Z] Copying: 310/1024 [MB] (34 MBps) [2024-11-05T17:53:34.538Z] Copying: 351/1024 [MB] (41 MBps) [2024-11-05T17:53:35.471Z] Copying: 395/1024 [MB] (43 MBps) [2024-11-05T17:53:36.403Z] Copying: 438/1024 [MB] (43 MBps) [2024-11-05T17:53:37.367Z] Copying: 482/1024 [MB] (43 MBps) [2024-11-05T17:53:38.300Z] Copying: 523/1024 [MB] (40 MBps) [2024-11-05T17:53:39.673Z] Copying: 564/1024 [MB] (41 MBps) [2024-11-05T17:53:40.606Z] Copying: 607/1024 [MB] (42 MBps) [2024-11-05T17:53:41.541Z] Copying: 652/1024 [MB] (44 MBps) [2024-11-05T17:53:42.474Z] Copying: 695/1024 [MB] (43 MBps) [2024-11-05T17:53:43.406Z] Copying: 741/1024 [MB] (45 MBps) [2024-11-05T17:53:44.340Z] Copying: 788/1024 [MB] (46 MBps) [2024-11-05T17:53:45.310Z] Copying: 832/1024 [MB] (44 MBps) [2024-11-05T17:53:46.243Z] Copying: 878/1024 [MB] (46 MBps) [2024-11-05T17:53:47.615Z] Copying: 921/1024 [MB] (42 MBps) [2024-11-05T17:53:48.547Z] Copying: 963/1024 [MB] (41 MBps) [2024-11-05T17:53:48.806Z] Copying: 1009/1024 [MB] (46 MBps) [2024-11-05T17:53:48.806Z] Copying: 1024/1024 [MB] (average 34 MBps)[2024-11-05 17:53:48.557729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.815 [2024-11-05 17:53:48.557815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:28.815 [2024-11-05 17:53:48.557836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:28.815 
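'FTL startup' finishes in 102.245 ms and the 1 GiB write then streams through at interval rates between 15 and 46 MBps. The totals are consistent: 1024 MB between 17:53:19.2 and 17:53:48.6 is roughly 29 s, i.e. about 35 MB/s, in line with the reported average of 34 MBps. A throwaway sketch for summarizing those progress ticks, assuming this console output has been captured to a (hypothetical) ftl.log:

```bash
# Throwaway parser for the per-interval progress above; ftl.log is a
# hypothetical capture of this console output. The final
# "(average 34 MBps)" summary line is deliberately not matched.
grep -o 'Copying: [0-9]*/1024 \[MB\] ([0-9]* MBps)' ftl.log |
  awk '{ gsub(/[()]/, "", $4); sum += $4; n++ }
       END { if (n) printf "%d intervals, mean rate %.1f MBps\n", n, sum / n }'
```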
[2024-11-05 17:53:48.557849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.815 [2024-11-05 17:53:48.557893] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:28.815 [2024-11-05 17:53:48.558564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.815 [2024-11-05 17:53:48.558602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:28.815 [2024-11-05 17:53:48.558618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.647 ms 00:19:28.815 [2024-11-05 17:53:48.558634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.815 [2024-11-05 17:53:48.560291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.815 [2024-11-05 17:53:48.560335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:28.815 [2024-11-05 17:53:48.560352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.606 ms 00:19:28.815 [2024-11-05 17:53:48.560366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.815 [2024-11-05 17:53:48.588983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.815 [2024-11-05 17:53:48.589094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:28.815 [2024-11-05 17:53:48.589114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.576 ms 00:19:28.815 [2024-11-05 17:53:48.589148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.815 [2024-11-05 17:53:48.606038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.815 [2024-11-05 17:53:48.606151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:28.815 [2024-11-05 17:53:48.606172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.817 ms 00:19:28.815 [2024-11-05 17:53:48.606184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.816 [2024-11-05 17:53:48.608183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.816 [2024-11-05 17:53:48.608246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:28.816 [2024-11-05 17:53:48.608263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.805 ms 00:19:28.816 [2024-11-05 17:53:48.608276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.816 [2024-11-05 17:53:48.612159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.816 [2024-11-05 17:53:48.612226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:28.816 [2024-11-05 17:53:48.612246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.835 ms 00:19:28.816 [2024-11-05 17:53:48.612259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.816 [2024-11-05 17:53:48.612723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.816 [2024-11-05 17:53:48.612757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:28.816 [2024-11-05 17:53:48.612773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:19:28.816 [2024-11-05 17:53:48.612793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.816 [2024-11-05 17:53:48.614997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.816 [2024-11-05 17:53:48.615055] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:28.816 [2024-11-05 17:53:48.615088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.175 ms 00:19:28.816 [2024-11-05 17:53:48.615104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.816 [2024-11-05 17:53:48.616645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.816 [2024-11-05 17:53:48.616694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:28.816 [2024-11-05 17:53:48.616709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.427 ms 00:19:28.816 [2024-11-05 17:53:48.616725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.816 [2024-11-05 17:53:48.617918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.816 [2024-11-05 17:53:48.617963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:28.816 [2024-11-05 17:53:48.617978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.088 ms 00:19:28.816 [2024-11-05 17:53:48.617989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.816 [2024-11-05 17:53:48.619764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.816 [2024-11-05 17:53:48.619863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:28.816 [2024-11-05 17:53:48.619892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.529 ms 00:19:28.816 [2024-11-05 17:53:48.619909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.816 [2024-11-05 17:53:48.619982] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:28.816 [2024-11-05 17:53:48.620031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620286] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 
17:53:48.620734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.620987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.621006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.621023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.621041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.621059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.621097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.621117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.621135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.621154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.621173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.621192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.621209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.621226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.621245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 
00:19:28.816 [2024-11-05 17:53:48.621262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:28.816 [2024-11-05 17:53:48.621280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.621298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.621316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.621333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.621351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.621370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.621387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.621404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.621422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.621439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.621457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.621474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.621493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.621512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.621529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.621546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.621570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.621606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.621641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.621666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.621693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.621719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.621737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.621758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 
wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.621777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.621794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.621815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.621834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.621852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.621869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.621888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.621906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.621933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.621952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.621970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.621988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.622006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:28.817 [2024-11-05 17:53:48.622045] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:28.817 [2024-11-05 17:53:48.622089] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 76f78c8c-6fe9-4417-9c01-875c84cce43d 00:19:28.817 [2024-11-05 17:53:48.622127] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:28.817 [2024-11-05 17:53:48.622147] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:28.817 [2024-11-05 17:53:48.622171] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:28.817 [2024-11-05 17:53:48.622190] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:28.817 [2024-11-05 17:53:48.622208] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:28.817 [2024-11-05 17:53:48.622225] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:28.817 [2024-11-05 17:53:48.622243] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:28.817 [2024-11-05 17:53:48.622258] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:28.817 [2024-11-05 17:53:48.622274] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:28.817 [2024-11-05 17:53:48.622290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.817 [2024-11-05 17:53:48.622308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:28.817 [2024-11-05 17:53:48.622350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.310 ms 00:19:28.817 [2024-11-05 17:53:48.622368] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.817 [2024-11-05 17:53:48.624750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.817 [2024-11-05 17:53:48.624800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:28.817 [2024-11-05 17:53:48.624815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.333 ms 00:19:28.817 [2024-11-05 17:53:48.624826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.817 [2024-11-05 17:53:48.624953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.817 [2024-11-05 17:53:48.624977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:28.817 [2024-11-05 17:53:48.624989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:19:28.817 [2024-11-05 17:53:48.624999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.817 [2024-11-05 17:53:48.631937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.817 [2024-11-05 17:53:48.632008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:28.817 [2024-11-05 17:53:48.632036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.817 [2024-11-05 17:53:48.632047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.817 [2024-11-05 17:53:48.632177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.817 [2024-11-05 17:53:48.632195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:28.817 [2024-11-05 17:53:48.632210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.817 [2024-11-05 17:53:48.632220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.817 [2024-11-05 17:53:48.632349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.817 [2024-11-05 17:53:48.632370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:28.817 [2024-11-05 17:53:48.632381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.817 [2024-11-05 17:53:48.632391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.817 [2024-11-05 17:53:48.632411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.817 [2024-11-05 17:53:48.632422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:28.817 [2024-11-05 17:53:48.632436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.817 [2024-11-05 17:53:48.632446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.817 [2024-11-05 17:53:48.645564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.817 [2024-11-05 17:53:48.645633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:28.817 [2024-11-05 17:53:48.645647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.817 [2024-11-05 17:53:48.645657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.817 [2024-11-05 17:53:48.655503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.817 [2024-11-05 17:53:48.655574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:28.817 [2024-11-05 17:53:48.655587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:19:28.817 [2024-11-05 17:53:48.655595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.817 [2024-11-05 17:53:48.655664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.817 [2024-11-05 17:53:48.655680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:28.817 [2024-11-05 17:53:48.655689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.817 [2024-11-05 17:53:48.655697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.817 [2024-11-05 17:53:48.655732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.817 [2024-11-05 17:53:48.655740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:28.817 [2024-11-05 17:53:48.655749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.817 [2024-11-05 17:53:48.655760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.817 [2024-11-05 17:53:48.655842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.817 [2024-11-05 17:53:48.655860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:28.817 [2024-11-05 17:53:48.655872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.817 [2024-11-05 17:53:48.655883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.817 [2024-11-05 17:53:48.655921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.817 [2024-11-05 17:53:48.655933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:28.817 [2024-11-05 17:53:48.655944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.817 [2024-11-05 17:53:48.655954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.818 [2024-11-05 17:53:48.656012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.818 [2024-11-05 17:53:48.656027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:28.818 [2024-11-05 17:53:48.656036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.818 [2024-11-05 17:53:48.656045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.818 [2024-11-05 17:53:48.656187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.818 [2024-11-05 17:53:48.656205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:28.818 [2024-11-05 17:53:48.656214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.818 [2024-11-05 17:53:48.656224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.818 [2024-11-05 17:53:48.656352] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 98.629 ms, result 0 00:19:30.192 00:19:30.192 00:19:30.192 17:53:49 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:30.192 [2024-11-05 17:53:49.915383] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 
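This second 'FTL shutdown' (98.629 ms) leaves all 100 bands free, and the statistics dump shows WAF: inf, which follows from the definition: write amplification is total media writes divided by user writes, and here 960 internal writes landed on the media while user writes stayed at 0, so the ratio has no finite value. restore.sh@74 then starts the read-back, pulling 262144 blocks out of ftl0 over the original testfile. A sketch of that verify phase, reusing the variables from the write-phase sketch above (the md5 comparison is an assumption about how the check would look, not the script's literal code):

```bash
# restore.sh@74 as seen above: read 262144 blocks back out of ftl0 over the
# original file. The md5 comparison is an assumed verify step; it reuses
# testfile.md5 from the write-phase sketch.
"$SPDK/build/bin/spdk_dd" --ib=ftl0 --of="$TESTFILE" \
    --json="$FTL_JSON" --count=262144

md5sum -c "$TESTFILE.md5"   # succeeds only if every restored block matches
```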
00:19:30.192 [2024-11-05 17:53:49.915516] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87723 ] 00:19:30.192 [2024-11-05 17:53:50.045323] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:30.192 [2024-11-05 17:53:50.073541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.192 [2024-11-05 17:53:50.100383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.453 [2024-11-05 17:53:50.207846] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:30.453 [2024-11-05 17:53:50.207926] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:30.453 [2024-11-05 17:53:50.362690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.453 [2024-11-05 17:53:50.362772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:30.453 [2024-11-05 17:53:50.362787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:30.453 [2024-11-05 17:53:50.362797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.453 [2024-11-05 17:53:50.362882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.453 [2024-11-05 17:53:50.362896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:30.453 [2024-11-05 17:53:50.362906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:19:30.453 [2024-11-05 17:53:50.362918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.453 [2024-11-05 17:53:50.362945] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:30.453 [2024-11-05 17:53:50.363295] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:30.453 [2024-11-05 17:53:50.363330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.453 [2024-11-05 17:53:50.363339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:30.453 [2024-11-05 17:53:50.363354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:19:30.453 [2024-11-05 17:53:50.363366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.453 [2024-11-05 17:53:50.365107] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:30.453 [2024-11-05 17:53:50.367826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.453 [2024-11-05 17:53:50.367866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:30.453 [2024-11-05 17:53:50.367881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.721 ms 00:19:30.453 [2024-11-05 17:53:50.367900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.453 [2024-11-05 17:53:50.367968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.453 [2024-11-05 17:53:50.367980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:30.453 [2024-11-05 17:53:50.367993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:30.453 [2024-11-05 
17:53:50.368002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.453 [2024-11-05 17:53:50.374715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.453 [2024-11-05 17:53:50.374751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:30.453 [2024-11-05 17:53:50.374766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.671 ms 00:19:30.453 [2024-11-05 17:53:50.374774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.453 [2024-11-05 17:53:50.374901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.453 [2024-11-05 17:53:50.374917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:30.453 [2024-11-05 17:53:50.374929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:19:30.453 [2024-11-05 17:53:50.374941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.453 [2024-11-05 17:53:50.375007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.453 [2024-11-05 17:53:50.375024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:30.453 [2024-11-05 17:53:50.375033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:30.453 [2024-11-05 17:53:50.375044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.453 [2024-11-05 17:53:50.375086] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:30.453 [2024-11-05 17:53:50.376768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.453 [2024-11-05 17:53:50.376798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:30.453 [2024-11-05 17:53:50.376808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.692 ms 00:19:30.453 [2024-11-05 17:53:50.376824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.453 [2024-11-05 17:53:50.376859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.453 [2024-11-05 17:53:50.376869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:30.453 [2024-11-05 17:53:50.376878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:30.453 [2024-11-05 17:53:50.376890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.453 [2024-11-05 17:53:50.376925] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:30.453 [2024-11-05 17:53:50.376950] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:30.453 [2024-11-05 17:53:50.376989] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:30.453 [2024-11-05 17:53:50.377010] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:30.453 [2024-11-05 17:53:50.377137] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:30.453 [2024-11-05 17:53:50.377155] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:30.453 [2024-11-05 17:53:50.377169] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:30.453 
[2024-11-05 17:53:50.377183] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:30.453 [2024-11-05 17:53:50.377197] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:30.453 [2024-11-05 17:53:50.377206] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:30.453 [2024-11-05 17:53:50.377214] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:30.453 [2024-11-05 17:53:50.377222] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:30.453 [2024-11-05 17:53:50.377229] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:30.453 [2024-11-05 17:53:50.377237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.453 [2024-11-05 17:53:50.377248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:30.453 [2024-11-05 17:53:50.377258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:19:30.453 [2024-11-05 17:53:50.377265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.453 [2024-11-05 17:53:50.377352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.453 [2024-11-05 17:53:50.377370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:30.453 [2024-11-05 17:53:50.377381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:30.453 [2024-11-05 17:53:50.377389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.453 [2024-11-05 17:53:50.377493] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:30.453 [2024-11-05 17:53:50.377514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:30.453 [2024-11-05 17:53:50.377527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:30.453 [2024-11-05 17:53:50.377536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.453 [2024-11-05 17:53:50.377546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:30.454 [2024-11-05 17:53:50.377553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:30.454 [2024-11-05 17:53:50.377560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:30.454 [2024-11-05 17:53:50.377567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:30.454 [2024-11-05 17:53:50.377581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:30.454 [2024-11-05 17:53:50.377588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:30.454 [2024-11-05 17:53:50.377598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:30.454 [2024-11-05 17:53:50.377605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:30.454 [2024-11-05 17:53:50.377612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:30.454 [2024-11-05 17:53:50.377618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:30.454 [2024-11-05 17:53:50.377626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:30.454 [2024-11-05 17:53:50.377633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.454 [2024-11-05 17:53:50.377640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 
00:19:30.454 [2024-11-05 17:53:50.377648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:30.454 [2024-11-05 17:53:50.377655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.454 [2024-11-05 17:53:50.377664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:30.454 [2024-11-05 17:53:50.377671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:30.454 [2024-11-05 17:53:50.377678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.454 [2024-11-05 17:53:50.377684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:30.454 [2024-11-05 17:53:50.377691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:30.454 [2024-11-05 17:53:50.377698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.454 [2024-11-05 17:53:50.377705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:30.454 [2024-11-05 17:53:50.377717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:30.454 [2024-11-05 17:53:50.377724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.454 [2024-11-05 17:53:50.377730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:30.454 [2024-11-05 17:53:50.377737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:30.454 [2024-11-05 17:53:50.377744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:30.454 [2024-11-05 17:53:50.377750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:30.454 [2024-11-05 17:53:50.377757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:30.454 [2024-11-05 17:53:50.377763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:30.454 [2024-11-05 17:53:50.377769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:30.454 [2024-11-05 17:53:50.377776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:30.454 [2024-11-05 17:53:50.377782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:30.454 [2024-11-05 17:53:50.377788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:30.454 [2024-11-05 17:53:50.377795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:30.454 [2024-11-05 17:53:50.377802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.454 [2024-11-05 17:53:50.377809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:30.454 [2024-11-05 17:53:50.377816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:30.454 [2024-11-05 17:53:50.377824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.454 [2024-11-05 17:53:50.377830] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:30.454 [2024-11-05 17:53:50.377840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:30.454 [2024-11-05 17:53:50.377848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:30.454 [2024-11-05 17:53:50.377855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:30.454 [2024-11-05 17:53:50.377864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:30.454 [2024-11-05 17:53:50.377870] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:30.454 [2024-11-05 17:53:50.377877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:30.454 [2024-11-05 17:53:50.377885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:30.454 [2024-11-05 17:53:50.377892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:30.454 [2024-11-05 17:53:50.377899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:30.454 [2024-11-05 17:53:50.377908] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:30.454 [2024-11-05 17:53:50.377918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:30.454 [2024-11-05 17:53:50.377930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:30.454 [2024-11-05 17:53:50.377937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:30.454 [2024-11-05 17:53:50.377944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:30.454 [2024-11-05 17:53:50.377953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:30.454 [2024-11-05 17:53:50.377962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:30.454 [2024-11-05 17:53:50.377969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:30.454 [2024-11-05 17:53:50.377976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:30.454 [2024-11-05 17:53:50.377982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:30.454 [2024-11-05 17:53:50.377989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:30.454 [2024-11-05 17:53:50.377997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:30.454 [2024-11-05 17:53:50.378004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:30.454 [2024-11-05 17:53:50.378011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:30.454 [2024-11-05 17:53:50.378019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:30.454 [2024-11-05 17:53:50.378026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:30.454 [2024-11-05 17:53:50.378033] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:30.454 [2024-11-05 17:53:50.378045] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 
blk_offs:0x0 blk_sz:0x20 00:19:30.454 [2024-11-05 17:53:50.378054] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:30.454 [2024-11-05 17:53:50.378077] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:30.454 [2024-11-05 17:53:50.378086] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:30.455 [2024-11-05 17:53:50.378095] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:30.455 [2024-11-05 17:53:50.378103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.455 [2024-11-05 17:53:50.378111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:30.455 [2024-11-05 17:53:50.378118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:19:30.455 [2024-11-05 17:53:50.378128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.455 [2024-11-05 17:53:50.390235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.455 [2024-11-05 17:53:50.390281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:30.455 [2024-11-05 17:53:50.390294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.057 ms 00:19:30.455 [2024-11-05 17:53:50.390304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.455 [2024-11-05 17:53:50.390408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.455 [2024-11-05 17:53:50.390422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:30.455 [2024-11-05 17:53:50.390432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:30.455 [2024-11-05 17:53:50.390444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.455 [2024-11-05 17:53:50.409139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.455 [2024-11-05 17:53:50.409200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:30.455 [2024-11-05 17:53:50.409214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.619 ms 00:19:30.455 [2024-11-05 17:53:50.409223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.455 [2024-11-05 17:53:50.409299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.455 [2024-11-05 17:53:50.409313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:30.455 [2024-11-05 17:53:50.409349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:30.455 [2024-11-05 17:53:50.409362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.455 [2024-11-05 17:53:50.409843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.455 [2024-11-05 17:53:50.409876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:30.455 [2024-11-05 17:53:50.409887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:19:30.455 [2024-11-05 17:53:50.409895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.455 [2024-11-05 17:53:50.410086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:30.455 [2024-11-05 17:53:50.410108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:30.455 [2024-11-05 17:53:50.410120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:19:30.455 [2024-11-05 17:53:50.410131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.455 [2024-11-05 17:53:50.417514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.455 [2024-11-05 17:53:50.417557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:30.455 [2024-11-05 17:53:50.417570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.357 ms 00:19:30.455 [2024-11-05 17:53:50.417581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.455 [2024-11-05 17:53:50.420976] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:30.455 [2024-11-05 17:53:50.421020] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:30.455 [2024-11-05 17:53:50.421040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.455 [2024-11-05 17:53:50.421052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:30.455 [2024-11-05 17:53:50.421080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.334 ms 00:19:30.455 [2024-11-05 17:53:50.421092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.714 [2024-11-05 17:53:50.445175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.714 [2024-11-05 17:53:50.445275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:30.714 [2024-11-05 17:53:50.445292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.911 ms 00:19:30.714 [2024-11-05 17:53:50.445423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.714 [2024-11-05 17:53:50.450265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.714 [2024-11-05 17:53:50.450379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:30.714 [2024-11-05 17:53:50.450413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.435 ms 00:19:30.714 [2024-11-05 17:53:50.450437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.714 [2024-11-05 17:53:50.453128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.714 [2024-11-05 17:53:50.453209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:30.714 [2024-11-05 17:53:50.453232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.390 ms 00:19:30.714 [2024-11-05 17:53:50.453265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.714 [2024-11-05 17:53:50.454001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.714 [2024-11-05 17:53:50.454045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:30.714 [2024-11-05 17:53:50.454085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:19:30.714 [2024-11-05 17:53:50.454103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.714 [2024-11-05 17:53:50.475330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.714 [2024-11-05 17:53:50.475398] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:30.714 [2024-11-05 17:53:50.475412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.181 ms 00:19:30.714 [2024-11-05 17:53:50.475433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.714 [2024-11-05 17:53:50.483428] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:30.714 [2024-11-05 17:53:50.486931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.714 [2024-11-05 17:53:50.486972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:30.714 [2024-11-05 17:53:50.486991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.440 ms 00:19:30.714 [2024-11-05 17:53:50.487001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.714 [2024-11-05 17:53:50.487127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.714 [2024-11-05 17:53:50.487140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:30.714 [2024-11-05 17:53:50.487149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:30.714 [2024-11-05 17:53:50.487158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.714 [2024-11-05 17:53:50.487238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.714 [2024-11-05 17:53:50.487256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:30.714 [2024-11-05 17:53:50.487268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:19:30.714 [2024-11-05 17:53:50.487276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.714 [2024-11-05 17:53:50.487298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.714 [2024-11-05 17:53:50.487306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:30.714 [2024-11-05 17:53:50.487315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:30.714 [2024-11-05 17:53:50.487325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.714 [2024-11-05 17:53:50.487362] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:30.714 [2024-11-05 17:53:50.487378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.714 [2024-11-05 17:53:50.487386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:30.714 [2024-11-05 17:53:50.487397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:30.714 [2024-11-05 17:53:50.487406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.714 [2024-11-05 17:53:50.491217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.714 [2024-11-05 17:53:50.491253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:30.714 [2024-11-05 17:53:50.491271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.793 ms 00:19:30.714 [2024-11-05 17:53:50.491280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.714 [2024-11-05 17:53:50.491361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.714 [2024-11-05 17:53:50.491372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:30.714 [2024-11-05 17:53:50.491383] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:19:30.714 [2024-11-05 17:53:50.491395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.714 [2024-11-05 17:53:50.492425] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 129.302 ms, result 0 00:19:32.087  [2024-11-05T17:53:53.011Z] Copying: 45/1024 [MB] (45 MBps) [2024-11-05T17:53:53.944Z] Copying: 88/1024 [MB] (43 MBps) [2024-11-05T17:53:54.917Z] Copying: 132/1024 [MB] (44 MBps) [2024-11-05T17:53:55.852Z] Copying: 172/1024 [MB] (39 MBps) [2024-11-05T17:53:56.788Z] Copying: 211/1024 [MB] (39 MBps) [2024-11-05T17:53:57.767Z] Copying: 248/1024 [MB] (36 MBps) [2024-11-05T17:53:58.699Z] Copying: 291/1024 [MB] (43 MBps) [2024-11-05T17:54:00.074Z] Copying: 329/1024 [MB] (37 MBps) [2024-11-05T17:54:01.010Z] Copying: 362/1024 [MB] (33 MBps) [2024-11-05T17:54:01.943Z] Copying: 401/1024 [MB] (39 MBps) [2024-11-05T17:54:02.875Z] Copying: 436/1024 [MB] (35 MBps) [2024-11-05T17:54:03.810Z] Copying: 461/1024 [MB] (24 MBps) [2024-11-05T17:54:04.754Z] Copying: 494/1024 [MB] (33 MBps) [2024-11-05T17:54:05.688Z] Copying: 520/1024 [MB] (26 MBps) [2024-11-05T17:54:07.064Z] Copying: 543/1024 [MB] (22 MBps) [2024-11-05T17:54:08.000Z] Copying: 570/1024 [MB] (27 MBps) [2024-11-05T17:54:08.943Z] Copying: 598/1024 [MB] (27 MBps) [2024-11-05T17:54:09.881Z] Copying: 614/1024 [MB] (15 MBps) [2024-11-05T17:54:10.821Z] Copying: 628/1024 [MB] (14 MBps) [2024-11-05T17:54:11.763Z] Copying: 640/1024 [MB] (12 MBps) [2024-11-05T17:54:12.712Z] Copying: 665736/1048576 [kB] (9472 kBps) [2024-11-05T17:54:14.096Z] Copying: 664/1024 [MB] (13 MBps) [2024-11-05T17:54:14.669Z] Copying: 674/1024 [MB] (10 MBps) [2024-11-05T17:54:16.054Z] Copying: 699668/1048576 [kB] (9152 kBps) [2024-11-05T17:54:16.997Z] Copying: 709880/1048576 [kB] (10212 kBps) [2024-11-05T17:54:17.940Z] Copying: 719916/1048576 [kB] (10036 kBps) [2024-11-05T17:54:18.888Z] Copying: 729800/1048576 [kB] (9884 kBps) [2024-11-05T17:54:19.833Z] Copying: 739448/1048576 [kB] (9648 kBps) [2024-11-05T17:54:20.778Z] Copying: 732/1024 [MB] (10 MBps) [2024-11-05T17:54:21.723Z] Copying: 759984/1048576 [kB] (10168 kBps) [2024-11-05T17:54:23.111Z] Copying: 769744/1048576 [kB] (9760 kBps) [2024-11-05T17:54:23.684Z] Copying: 763/1024 [MB] (11 MBps) [2024-11-05T17:54:25.073Z] Copying: 790624/1048576 [kB] (9128 kBps) [2024-11-05T17:54:26.017Z] Copying: 799764/1048576 [kB] (9140 kBps) [2024-11-05T17:54:27.016Z] Copying: 809820/1048576 [kB] (10056 kBps) [2024-11-05T17:54:27.959Z] Copying: 801/1024 [MB] (10 MBps) [2024-11-05T17:54:28.902Z] Copying: 830788/1048576 [kB] (10000 kBps) [2024-11-05T17:54:29.845Z] Copying: 822/1024 [MB] (11 MBps) [2024-11-05T17:54:30.788Z] Copying: 852120/1048576 [kB] (9996 kBps) [2024-11-05T17:54:31.728Z] Copying: 842/1024 [MB] (10 MBps) [2024-11-05T17:54:32.669Z] Copying: 852/1024 [MB] (10 MBps) [2024-11-05T17:54:34.051Z] Copying: 883592/1048576 [kB] (10216 kBps) [2024-11-05T17:54:34.999Z] Copying: 874/1024 [MB] (11 MBps) [2024-11-05T17:54:35.942Z] Copying: 886/1024 [MB] (11 MBps) [2024-11-05T17:54:36.887Z] Copying: 917336/1048576 [kB] (9880 kBps) [2024-11-05T17:54:37.826Z] Copying: 927184/1048576 [kB] (9848 kBps) [2024-11-05T17:54:38.766Z] Copying: 917/1024 [MB] (12 MBps) [2024-11-05T17:54:39.707Z] Copying: 929/1024 [MB] (12 MBps) [2024-11-05T17:54:40.702Z] Copying: 940/1024 [MB] (10 MBps) [2024-11-05T17:54:42.089Z] Copying: 972528/1048576 [kB] (9364 kBps) [2024-11-05T17:54:42.700Z] Copying: 981952/1048576 [kB] 
(9424 kBps) [2024-11-05T17:54:44.086Z] Copying: 992048/1048576 [kB] (10096 kBps) [2024-11-05T17:54:45.030Z] Copying: 1002100/1048576 [kB] (10052 kBps) [2024-11-05T17:54:45.972Z] Copying: 989/1024 [MB] (10 MBps) [2024-11-05T17:54:46.916Z] Copying: 1022544/1048576 [kB] (9756 kBps) [2024-11-05T17:54:47.880Z] Copying: 1032144/1048576 [kB] (9600 kBps) [2024-11-05T17:54:48.451Z] Copying: 1041840/1048576 [kB] (9696 kBps) [2024-11-05T17:54:48.451Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-05 17:54:48.424177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.460 [2024-11-05 17:54:48.424270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:28.460 [2024-11-05 17:54:48.424296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:28.460 [2024-11-05 17:54:48.424318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.460 [2024-11-05 17:54:48.424357] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:28.460 [2024-11-05 17:54:48.424945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.460 [2024-11-05 17:54:48.424981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:28.460 [2024-11-05 17:54:48.424996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:20:28.460 [2024-11-05 17:54:48.425011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.460 [2024-11-05 17:54:48.425439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.460 [2024-11-05 17:54:48.425468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:28.460 [2024-11-05 17:54:48.425485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:20:28.460 [2024-11-05 17:54:48.425506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.460 [2024-11-05 17:54:48.432402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.460 [2024-11-05 17:54:48.432447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:28.460 [2024-11-05 17:54:48.432465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.869 ms 00:20:28.460 [2024-11-05 17:54:48.432480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.460 [2024-11-05 17:54:48.441039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.460 [2024-11-05 17:54:48.441079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:28.460 [2024-11-05 17:54:48.441089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.531 ms 00:20:28.460 [2024-11-05 17:54:48.441105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.460 [2024-11-05 17:54:48.443837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.460 [2024-11-05 17:54:48.443871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:28.460 [2024-11-05 17:54:48.443880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.677 ms 00:20:28.460 [2024-11-05 17:54:48.443887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.460 [2024-11-05 17:54:48.447013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.460 [2024-11-05 17:54:48.447045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 
00:20:28.460 [2024-11-05 17:54:48.447055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.098 ms 00:20:28.460 [2024-11-05 17:54:48.447062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.461 [2024-11-05 17:54:48.447182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.461 [2024-11-05 17:54:48.447191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:28.461 [2024-11-05 17:54:48.447200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:20:28.461 [2024-11-05 17:54:48.447207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.461 [2024-11-05 17:54:48.450152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.461 [2024-11-05 17:54:48.450180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:28.461 [2024-11-05 17:54:48.450188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.926 ms 00:20:28.461 [2024-11-05 17:54:48.450204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.461 [2024-11-05 17:54:48.452342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.461 [2024-11-05 17:54:48.452370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:28.461 [2024-11-05 17:54:48.452379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.111 ms 00:20:28.461 [2024-11-05 17:54:48.452385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.723 [2024-11-05 17:54:48.454547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.723 [2024-11-05 17:54:48.454580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:28.723 [2024-11-05 17:54:48.454589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.137 ms 00:20:28.723 [2024-11-05 17:54:48.454597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.723 [2024-11-05 17:54:48.456242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.723 [2024-11-05 17:54:48.456273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:28.723 [2024-11-05 17:54:48.456283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.595 ms 00:20:28.723 [2024-11-05 17:54:48.456290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.723 [2024-11-05 17:54:48.456317] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:28.723 [2024-11-05 17:54:48.456332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456599] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:28.723 [2024-11-05 17:54:48.456659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456802] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 
17:54:48.456985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.456993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.457000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.457007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.457014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.457021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.457028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.457036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.457043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.457050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.457058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.457075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.457084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.457091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.457099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.457106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.457114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.457121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.457129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:28.724 [2024-11-05 17:54:48.457145] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:28.724 [2024-11-05 17:54:48.457162] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 76f78c8c-6fe9-4417-9c01-875c84cce43d 00:20:28.724 [2024-11-05 17:54:48.457169] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:28.724 [2024-11-05 17:54:48.457177] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:28.724 [2024-11-05 17:54:48.457184] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:28.724 [2024-11-05 17:54:48.457191] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:28.724 [2024-11-05 17:54:48.457198] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:28.724 [2024-11-05 
17:54:48.457205] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:28.724 [2024-11-05 17:54:48.457212] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:28.724 [2024-11-05 17:54:48.457219] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:28.724 [2024-11-05 17:54:48.457225] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:28.724 [2024-11-05 17:54:48.457232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.724 [2024-11-05 17:54:48.457244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:28.724 [2024-11-05 17:54:48.457252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.915 ms 00:20:28.724 [2024-11-05 17:54:48.457265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.724 [2024-11-05 17:54:48.458660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.724 [2024-11-05 17:54:48.458684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:28.724 [2024-11-05 17:54:48.458693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.381 ms 00:20:28.724 [2024-11-05 17:54:48.458700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.724 [2024-11-05 17:54:48.458777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.724 [2024-11-05 17:54:48.458785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:28.724 [2024-11-05 17:54:48.458797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:20:28.724 [2024-11-05 17:54:48.458805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.724 [2024-11-05 17:54:48.463516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.724 [2024-11-05 17:54:48.463547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:28.724 [2024-11-05 17:54:48.463555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.724 [2024-11-05 17:54:48.463563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.724 [2024-11-05 17:54:48.463613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.724 [2024-11-05 17:54:48.463620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:28.724 [2024-11-05 17:54:48.463628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.724 [2024-11-05 17:54:48.463634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.724 [2024-11-05 17:54:48.463688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.724 [2024-11-05 17:54:48.463697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:28.724 [2024-11-05 17:54:48.463704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.724 [2024-11-05 17:54:48.463711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.724 [2024-11-05 17:54:48.463726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.724 [2024-11-05 17:54:48.463736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:28.725 [2024-11-05 17:54:48.463744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.725 [2024-11-05 17:54:48.463751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:28.725 [2024-11-05 17:54:48.472220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.725 [2024-11-05 17:54:48.472259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:28.725 [2024-11-05 17:54:48.472269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.725 [2024-11-05 17:54:48.472276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.725 [2024-11-05 17:54:48.479033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.725 [2024-11-05 17:54:48.479082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:28.725 [2024-11-05 17:54:48.479093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.725 [2024-11-05 17:54:48.479109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.725 [2024-11-05 17:54:48.479136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.725 [2024-11-05 17:54:48.479144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:28.725 [2024-11-05 17:54:48.479153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.725 [2024-11-05 17:54:48.479161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.725 [2024-11-05 17:54:48.479201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.725 [2024-11-05 17:54:48.479211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:28.725 [2024-11-05 17:54:48.479223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.725 [2024-11-05 17:54:48.479235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.725 [2024-11-05 17:54:48.479296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.725 [2024-11-05 17:54:48.479307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:28.725 [2024-11-05 17:54:48.479316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.725 [2024-11-05 17:54:48.479323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.725 [2024-11-05 17:54:48.479354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.725 [2024-11-05 17:54:48.479364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:28.725 [2024-11-05 17:54:48.479372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.725 [2024-11-05 17:54:48.479383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.725 [2024-11-05 17:54:48.479421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.725 [2024-11-05 17:54:48.479433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:28.725 [2024-11-05 17:54:48.479441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.725 [2024-11-05 17:54:48.479450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.725 [2024-11-05 17:54:48.479487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.725 [2024-11-05 17:54:48.479498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:28.725 [2024-11-05 17:54:48.479509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.725 
[2024-11-05 17:54:48.479517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.725 [2024-11-05 17:54:48.479629] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.455 ms, result 0 00:20:28.725 00:20:28.725 00:20:28.725 17:54:48 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:31.272 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:31.272 17:54:50 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:31.272 [2024-11-05 17:54:50.871705] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:20:31.272 [2024-11-05 17:54:50.871835] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88347 ] 00:20:31.272 [2024-11-05 17:54:51.012410] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:31.272 [2024-11-05 17:54:51.040349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.272 [2024-11-05 17:54:51.060112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.272 [2024-11-05 17:54:51.148831] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:31.272 [2024-11-05 17:54:51.148887] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:31.535 [2024-11-05 17:54:51.303557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.535 [2024-11-05 17:54:51.303610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:31.535 [2024-11-05 17:54:51.303623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:31.535 [2024-11-05 17:54:51.303631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.535 [2024-11-05 17:54:51.303680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.535 [2024-11-05 17:54:51.303693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:31.535 [2024-11-05 17:54:51.303705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:31.535 [2024-11-05 17:54:51.303712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.535 [2024-11-05 17:54:51.303733] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:31.535 [2024-11-05 17:54:51.303964] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:31.535 [2024-11-05 17:54:51.303979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.535 [2024-11-05 17:54:51.303987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:31.535 [2024-11-05 17:54:51.303997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:20:31.535 [2024-11-05 17:54:51.304005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.535 [2024-11-05 17:54:51.305124] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: 
[FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:31.535 [2024-11-05 17:54:51.307339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.535 [2024-11-05 17:54:51.307370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:31.535 [2024-11-05 17:54:51.307380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.217 ms 00:20:31.535 [2024-11-05 17:54:51.307403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.535 [2024-11-05 17:54:51.307463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.535 [2024-11-05 17:54:51.307478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:31.535 [2024-11-05 17:54:51.307486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:31.535 [2024-11-05 17:54:51.307497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.535 [2024-11-05 17:54:51.312302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.535 [2024-11-05 17:54:51.312329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:31.535 [2024-11-05 17:54:51.312342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.749 ms 00:20:31.535 [2024-11-05 17:54:51.312355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.535 [2024-11-05 17:54:51.312440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.535 [2024-11-05 17:54:51.312459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:31.535 [2024-11-05 17:54:51.312468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:20:31.535 [2024-11-05 17:54:51.312478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.535 [2024-11-05 17:54:51.312518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.535 [2024-11-05 17:54:51.312528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:31.535 [2024-11-05 17:54:51.312536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:31.535 [2024-11-05 17:54:51.312547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.535 [2024-11-05 17:54:51.312568] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:31.535 [2024-11-05 17:54:51.313905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.535 [2024-11-05 17:54:51.313931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:31.535 [2024-11-05 17:54:51.313947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.342 ms 00:20:31.535 [2024-11-05 17:54:51.313955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.535 [2024-11-05 17:54:51.313983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.535 [2024-11-05 17:54:51.313993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:31.535 [2024-11-05 17:54:51.314002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:31.535 [2024-11-05 17:54:51.314012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.535 [2024-11-05 17:54:51.314034] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:31.535 [2024-11-05 17:54:51.314053] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:31.535 [2024-11-05 17:54:51.314101] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:31.535 [2024-11-05 17:54:51.314124] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:31.535 [2024-11-05 17:54:51.314226] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:31.535 [2024-11-05 17:54:51.314243] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:31.535 [2024-11-05 17:54:51.314258] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:31.535 [2024-11-05 17:54:51.314269] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:31.535 [2024-11-05 17:54:51.314281] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:31.535 [2024-11-05 17:54:51.314296] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:31.535 [2024-11-05 17:54:51.314304] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:31.535 [2024-11-05 17:54:51.314312] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:31.535 [2024-11-05 17:54:51.314321] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:31.535 [2024-11-05 17:54:51.314330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.535 [2024-11-05 17:54:51.314341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:31.535 [2024-11-05 17:54:51.314353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:20:31.535 [2024-11-05 17:54:51.314363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.535 [2024-11-05 17:54:51.314449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.535 [2024-11-05 17:54:51.314457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:31.535 [2024-11-05 17:54:51.314464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:31.535 [2024-11-05 17:54:51.314472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.535 [2024-11-05 17:54:51.314571] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:31.535 [2024-11-05 17:54:51.314581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:31.535 [2024-11-05 17:54:51.314590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:31.535 [2024-11-05 17:54:51.314597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.535 [2024-11-05 17:54:51.314604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:31.535 [2024-11-05 17:54:51.314611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:31.535 [2024-11-05 17:54:51.314619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:31.535 [2024-11-05 17:54:51.314626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:31.535 [2024-11-05 17:54:51.314638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:31.535 [2024-11-05 
17:54:51.314645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:31.535 [2024-11-05 17:54:51.314657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:31.535 [2024-11-05 17:54:51.314664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:31.535 [2024-11-05 17:54:51.314670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:31.535 [2024-11-05 17:54:51.314677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:31.535 [2024-11-05 17:54:51.314684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:31.535 [2024-11-05 17:54:51.314691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.535 [2024-11-05 17:54:51.314697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:31.535 [2024-11-05 17:54:51.314705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:31.535 [2024-11-05 17:54:51.314711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.535 [2024-11-05 17:54:51.314718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:31.535 [2024-11-05 17:54:51.314726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:31.535 [2024-11-05 17:54:51.314732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.535 [2024-11-05 17:54:51.314740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:31.535 [2024-11-05 17:54:51.314747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:31.535 [2024-11-05 17:54:51.314754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.535 [2024-11-05 17:54:51.314761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:31.535 [2024-11-05 17:54:51.314772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:31.535 [2024-11-05 17:54:51.314778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.535 [2024-11-05 17:54:51.314785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:31.535 [2024-11-05 17:54:51.314791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:31.535 [2024-11-05 17:54:51.314798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.535 [2024-11-05 17:54:51.314804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:31.536 [2024-11-05 17:54:51.314811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:31.536 [2024-11-05 17:54:51.314818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:31.536 [2024-11-05 17:54:51.314832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:31.536 [2024-11-05 17:54:51.314839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:31.536 [2024-11-05 17:54:51.314845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:31.536 [2024-11-05 17:54:51.314852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:31.536 [2024-11-05 17:54:51.314859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:31.536 [2024-11-05 17:54:51.314866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.536 [2024-11-05 17:54:51.314872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:20:31.536 [2024-11-05 17:54:51.314879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:31.536 [2024-11-05 17:54:51.314888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.536 [2024-11-05 17:54:51.314894] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:31.536 [2024-11-05 17:54:51.314904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:31.536 [2024-11-05 17:54:51.314910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:31.536 [2024-11-05 17:54:51.314922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.536 [2024-11-05 17:54:51.314932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:31.536 [2024-11-05 17:54:51.314939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:31.536 [2024-11-05 17:54:51.314945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:31.536 [2024-11-05 17:54:51.314953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:31.536 [2024-11-05 17:54:51.314960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:31.536 [2024-11-05 17:54:51.314966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:31.536 [2024-11-05 17:54:51.314975] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:31.536 [2024-11-05 17:54:51.314985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:31.536 [2024-11-05 17:54:51.314993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:31.536 [2024-11-05 17:54:51.315001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:31.536 [2024-11-05 17:54:51.315008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:31.536 [2024-11-05 17:54:51.315017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:31.536 [2024-11-05 17:54:51.315024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:31.536 [2024-11-05 17:54:51.315031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:31.536 [2024-11-05 17:54:51.315038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:31.536 [2024-11-05 17:54:51.315046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:31.536 [2024-11-05 17:54:51.315052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:31.536 [2024-11-05 17:54:51.315059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:31.536 [2024-11-05 17:54:51.315078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:31.536 [2024-11-05 17:54:51.315085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:31.536 [2024-11-05 17:54:51.315093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:31.536 [2024-11-05 17:54:51.315100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:31.536 [2024-11-05 17:54:51.315107] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:31.536 [2024-11-05 17:54:51.315116] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:31.536 [2024-11-05 17:54:51.315123] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:31.536 [2024-11-05 17:54:51.315130] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:31.536 [2024-11-05 17:54:51.315137] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:31.536 [2024-11-05 17:54:51.315146] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:31.536 [2024-11-05 17:54:51.315154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.536 [2024-11-05 17:54:51.315161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:31.536 [2024-11-05 17:54:51.315168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.651 ms 00:20:31.536 [2024-11-05 17:54:51.315178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.536 [2024-11-05 17:54:51.323799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.536 [2024-11-05 17:54:51.323827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:31.536 [2024-11-05 17:54:51.323837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.575 ms 00:20:31.536 [2024-11-05 17:54:51.323846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.536 [2024-11-05 17:54:51.323925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.536 [2024-11-05 17:54:51.323933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:31.536 [2024-11-05 17:54:51.323941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:20:31.536 [2024-11-05 17:54:51.323949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.536 [2024-11-05 17:54:51.339749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.536 [2024-11-05 17:54:51.339786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:31.536 [2024-11-05 17:54:51.339798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.752 ms 00:20:31.536 [2024-11-05 17:54:51.339808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.536 [2024-11-05 17:54:51.339847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.536 [2024-11-05 
17:54:51.339858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:31.536 [2024-11-05 17:54:51.339868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:31.536 [2024-11-05 17:54:51.339880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.536 [2024-11-05 17:54:51.340242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.536 [2024-11-05 17:54:51.340267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:31.536 [2024-11-05 17:54:51.340278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:20:31.536 [2024-11-05 17:54:51.340287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.536 [2024-11-05 17:54:51.340421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.536 [2024-11-05 17:54:51.340436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:31.536 [2024-11-05 17:54:51.340444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:20:31.536 [2024-11-05 17:54:51.340451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.536 [2024-11-05 17:54:51.345779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.536 [2024-11-05 17:54:51.345807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:31.536 [2024-11-05 17:54:51.345825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.307 ms 00:20:31.536 [2024-11-05 17:54:51.345835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.536 [2024-11-05 17:54:51.348437] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:31.536 [2024-11-05 17:54:51.348471] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:31.536 [2024-11-05 17:54:51.348486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.536 [2024-11-05 17:54:51.348497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:31.536 [2024-11-05 17:54:51.348506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.562 ms 00:20:31.536 [2024-11-05 17:54:51.348516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.536 [2024-11-05 17:54:51.363838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.536 [2024-11-05 17:54:51.363893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:31.536 [2024-11-05 17:54:51.363906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.217 ms 00:20:31.536 [2024-11-05 17:54:51.363916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.536 [2024-11-05 17:54:51.365948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.536 [2024-11-05 17:54:51.365976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:31.536 [2024-11-05 17:54:51.365985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.975 ms 00:20:31.536 [2024-11-05 17:54:51.365992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.536 [2024-11-05 17:54:51.367539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.536 [2024-11-05 17:54:51.367565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Restore trim metadata 00:20:31.536 [2024-11-05 17:54:51.367574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.513 ms 00:20:31.536 [2024-11-05 17:54:51.367581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.536 [2024-11-05 17:54:51.367902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.536 [2024-11-05 17:54:51.367920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:31.536 [2024-11-05 17:54:51.367930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:20:31.536 [2024-11-05 17:54:51.367937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.536 [2024-11-05 17:54:51.385337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.537 [2024-11-05 17:54:51.385389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:31.537 [2024-11-05 17:54:51.385401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.372 ms 00:20:31.537 [2024-11-05 17:54:51.385410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.537 [2024-11-05 17:54:51.393043] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:31.537 [2024-11-05 17:54:51.395892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.537 [2024-11-05 17:54:51.395926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:31.537 [2024-11-05 17:54:51.395939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.438 ms 00:20:31.537 [2024-11-05 17:54:51.395948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.537 [2024-11-05 17:54:51.396019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.537 [2024-11-05 17:54:51.396031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:31.537 [2024-11-05 17:54:51.396041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:31.537 [2024-11-05 17:54:51.396055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.537 [2024-11-05 17:54:51.396156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.537 [2024-11-05 17:54:51.396168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:31.537 [2024-11-05 17:54:51.396184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:31.537 [2024-11-05 17:54:51.396193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.537 [2024-11-05 17:54:51.396215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.537 [2024-11-05 17:54:51.396226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:31.537 [2024-11-05 17:54:51.396235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:31.537 [2024-11-05 17:54:51.396244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.537 [2024-11-05 17:54:51.396272] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:31.537 [2024-11-05 17:54:51.396284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.537 [2024-11-05 17:54:51.396295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:31.537 [2024-11-05 17:54:51.396307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.013 ms 00:20:31.537 [2024-11-05 17:54:51.396316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.537 [2024-11-05 17:54:51.399800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.537 [2024-11-05 17:54:51.399832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:31.537 [2024-11-05 17:54:51.399849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.463 ms 00:20:31.537 [2024-11-05 17:54:51.399857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.537 [2024-11-05 17:54:51.399988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.537 [2024-11-05 17:54:51.400012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:31.537 [2024-11-05 17:54:51.400022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:31.537 [2024-11-05 17:54:51.400033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.537 [2024-11-05 17:54:51.400986] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 97.048 ms, result 0 00:20:32.478  [2024-11-05T17:54:53.411Z] Copying: 10180/1048576 [kB] (10180 kBps) [2024-11-05T17:54:54.798Z] Copying: 22/1024 [MB] (12 MBps) [2024-11-05T17:54:55.741Z] Copying: 33/1024 [MB] (10 MBps) [2024-11-05T17:54:56.737Z] Copying: 43784/1048576 [kB] (9912 kBps) [2024-11-05T17:54:57.678Z] Copying: 53/1024 [MB] (10 MBps) [2024-11-05T17:54:58.627Z] Copying: 63/1024 [MB] (10 MBps) [2024-11-05T17:54:59.588Z] Copying: 75048/1048576 [kB] (10060 kBps) [2024-11-05T17:55:00.537Z] Copying: 85072/1048576 [kB] (10024 kBps) [2024-11-05T17:55:01.502Z] Copying: 93/1024 [MB] (10 MBps) [2024-11-05T17:55:02.447Z] Copying: 105728/1048576 [kB] (10108 kBps) [2024-11-05T17:55:03.829Z] Copying: 113/1024 [MB] (10 MBps) [2024-11-05T17:55:04.421Z] Copying: 123/1024 [MB] (10 MBps) [2024-11-05T17:55:05.857Z] Copying: 136984/1048576 [kB] (10156 kBps) [2024-11-05T17:55:06.430Z] Copying: 144/1024 [MB] (10 MBps) [2024-11-05T17:55:07.816Z] Copying: 154/1024 [MB] (10 MBps) [2024-11-05T17:55:08.754Z] Copying: 165/1024 [MB] (10 MBps) [2024-11-05T17:55:09.693Z] Copying: 175/1024 [MB] (10 MBps) [2024-11-05T17:55:10.640Z] Copying: 189488/1048576 [kB] (9996 kBps) [2024-11-05T17:55:11.598Z] Copying: 199496/1048576 [kB] (10008 kBps) [2024-11-05T17:55:12.541Z] Copying: 205/1024 [MB] (10 MBps) [2024-11-05T17:55:13.486Z] Copying: 220148/1048576 [kB] (9676 kBps) [2024-11-05T17:55:14.427Z] Copying: 230352/1048576 [kB] (10204 kBps) [2024-11-05T17:55:15.812Z] Copying: 239704/1048576 [kB] (9352 kBps) [2024-11-05T17:55:16.756Z] Copying: 249352/1048576 [kB] (9648 kBps) [2024-11-05T17:55:17.699Z] Copying: 253/1024 [MB] (10 MBps) [2024-11-05T17:55:18.640Z] Copying: 268356/1048576 [kB] (8692 kBps) [2024-11-05T17:55:19.647Z] Copying: 278424/1048576 [kB] (10068 kBps) [2024-11-05T17:55:20.588Z] Copying: 288200/1048576 [kB] (9776 kBps) [2024-11-05T17:55:21.532Z] Copying: 296880/1048576 [kB] (8680 kBps) [2024-11-05T17:55:22.477Z] Copying: 305168/1048576 [kB] (8288 kBps) [2024-11-05T17:55:23.438Z] Copying: 314448/1048576 [kB] (9280 kBps) [2024-11-05T17:55:24.828Z] Copying: 323432/1048576 [kB] (8984 kBps) [2024-11-05T17:55:25.771Z] Copying: 332704/1048576 [kB] (9272 kBps) [2024-11-05T17:55:26.712Z] Copying: 342336/1048576 [kB] (9632 kBps) [2024-11-05T17:55:27.653Z] Copying: 351412/1048576 [kB] (9076 kBps) [2024-11-05T17:55:28.594Z] Copying: 361128/1048576 [kB] 
(9716 kBps) [2024-11-05T17:55:29.532Z] Copying: 371112/1048576 [kB] (9984 kBps) [2024-11-05T17:55:30.475Z] Copying: 380436/1048576 [kB] (9324 kBps) [2024-11-05T17:55:31.420Z] Copying: 390508/1048576 [kB] (10072 kBps) [2024-11-05T17:55:32.828Z] Copying: 394/1024 [MB] (12 MBps) [2024-11-05T17:55:33.772Z] Copying: 407/1024 [MB] (13 MBps) [2024-11-05T17:55:34.716Z] Copying: 418/1024 [MB] (10 MBps) [2024-11-05T17:55:35.657Z] Copying: 428/1024 [MB] (10 MBps) [2024-11-05T17:55:36.601Z] Copying: 448752/1048576 [kB] (10136 kBps) [2024-11-05T17:55:37.544Z] Copying: 458976/1048576 [kB] (10224 kBps) [2024-11-05T17:55:38.487Z] Copying: 458/1024 [MB] (10 MBps) [2024-11-05T17:55:39.430Z] Copying: 468/1024 [MB] (10 MBps) [2024-11-05T17:55:40.838Z] Copying: 479/1024 [MB] (10 MBps) [2024-11-05T17:55:41.780Z] Copying: 489/1024 [MB] (10 MBps) [2024-11-05T17:55:42.719Z] Copying: 511148/1048576 [kB] (9924 kBps) [2024-11-05T17:55:43.670Z] Copying: 512/1024 [MB] (13 MBps) [2024-11-05T17:55:44.614Z] Copying: 529/1024 [MB] (16 MBps) [2024-11-05T17:55:45.553Z] Copying: 552048/1048576 [kB] (9996 kBps) [2024-11-05T17:55:46.495Z] Copying: 550/1024 [MB] (11 MBps) [2024-11-05T17:55:47.472Z] Copying: 561/1024 [MB] (10 MBps) [2024-11-05T17:55:48.415Z] Copying: 571/1024 [MB] (10 MBps) [2024-11-05T17:55:49.803Z] Copying: 582/1024 [MB] (10 MBps) [2024-11-05T17:55:50.743Z] Copying: 592/1024 [MB] (10 MBps) [2024-11-05T17:55:51.714Z] Copying: 604/1024 [MB] (12 MBps) [2024-11-05T17:55:52.657Z] Copying: 629376/1048576 [kB] (9880 kBps) [2024-11-05T17:55:53.600Z] Copying: 638776/1048576 [kB] (9400 kBps) [2024-11-05T17:55:54.544Z] Copying: 634/1024 [MB] (10 MBps) [2024-11-05T17:55:55.489Z] Copying: 658364/1048576 [kB] (9012 kBps) [2024-11-05T17:55:56.430Z] Copying: 667304/1048576 [kB] (8940 kBps) [2024-11-05T17:55:57.810Z] Copying: 677064/1048576 [kB] (9760 kBps) [2024-11-05T17:55:58.749Z] Copying: 687196/1048576 [kB] (10132 kBps) [2024-11-05T17:55:59.690Z] Copying: 685/1024 [MB] (14 MBps) [2024-11-05T17:56:00.632Z] Copying: 699/1024 [MB] (13 MBps) [2024-11-05T17:56:01.572Z] Copying: 712/1024 [MB] (13 MBps) [2024-11-05T17:56:02.513Z] Copying: 724/1024 [MB] (12 MBps) [2024-11-05T17:56:03.454Z] Copying: 750/1024 [MB] (25 MBps) [2024-11-05T17:56:04.835Z] Copying: 773/1024 [MB] (22 MBps) [2024-11-05T17:56:05.772Z] Copying: 800/1024 [MB] (27 MBps) [2024-11-05T17:56:06.708Z] Copying: 827/1024 [MB] (26 MBps) [2024-11-05T17:56:07.644Z] Copying: 846/1024 [MB] (18 MBps) [2024-11-05T17:56:08.583Z] Copying: 865/1024 [MB] (19 MBps) [2024-11-05T17:56:09.527Z] Copying: 881/1024 [MB] (15 MBps) [2024-11-05T17:56:10.463Z] Copying: 894/1024 [MB] (12 MBps) [2024-11-05T17:56:11.444Z] Copying: 909/1024 [MB] (14 MBps) [2024-11-05T17:56:12.823Z] Copying: 926/1024 [MB] (17 MBps) [2024-11-05T17:56:13.758Z] Copying: 939/1024 [MB] (12 MBps) [2024-11-05T17:56:14.733Z] Copying: 950/1024 [MB] (11 MBps) [2024-11-05T17:56:15.667Z] Copying: 962/1024 [MB] (11 MBps) [2024-11-05T17:56:16.604Z] Copying: 973/1024 [MB] (11 MBps) [2024-11-05T17:56:17.579Z] Copying: 983/1024 [MB] (10 MBps) [2024-11-05T17:56:18.520Z] Copying: 993/1024 [MB] (10 MBps) [2024-11-05T17:56:19.465Z] Copying: 1004/1024 [MB] (10 MBps) [2024-11-05T17:56:20.854Z] Copying: 1014/1024 [MB] (10 MBps) [2024-11-05T17:56:21.427Z] Copying: 1047760/1048576 [kB] (8912 kBps) [2024-11-05T17:56:21.427Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-11-05 17:56:21.204845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.436 [2024-11-05 17:56:21.204900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Deinit core IO channel 00:22:01.436 [2024-11-05 17:56:21.204914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:01.436 [2024-11-05 17:56:21.204923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.436 [2024-11-05 17:56:21.205546] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:01.436 [2024-11-05 17:56:21.209287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.436 [2024-11-05 17:56:21.209324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:01.436 [2024-11-05 17:56:21.209341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.700 ms 00:22:01.436 [2024-11-05 17:56:21.209349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.436 [2024-11-05 17:56:21.221853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.436 [2024-11-05 17:56:21.221887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:01.436 [2024-11-05 17:56:21.221898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.987 ms 00:22:01.436 [2024-11-05 17:56:21.221906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.436 [2024-11-05 17:56:21.248708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.436 [2024-11-05 17:56:21.248752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:01.436 [2024-11-05 17:56:21.248764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.785 ms 00:22:01.436 [2024-11-05 17:56:21.248772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.436 [2024-11-05 17:56:21.254963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.436 [2024-11-05 17:56:21.254995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:01.436 [2024-11-05 17:56:21.255005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.156 ms 00:22:01.436 [2024-11-05 17:56:21.255013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.436 [2024-11-05 17:56:21.257758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.436 [2024-11-05 17:56:21.257791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:01.436 [2024-11-05 17:56:21.257800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.704 ms 00:22:01.436 [2024-11-05 17:56:21.257808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.436 [2024-11-05 17:56:21.261746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.436 [2024-11-05 17:56:21.261778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:01.436 [2024-11-05 17:56:21.261788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.909 ms 00:22:01.436 [2024-11-05 17:56:21.261801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.698 [2024-11-05 17:56:21.571168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.698 [2024-11-05 17:56:21.571232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:01.698 [2024-11-05 17:56:21.571246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 309.332 ms 00:22:01.698 [2024-11-05 17:56:21.571254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:01.698 [2024-11-05 17:56:21.574212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.698 [2024-11-05 17:56:21.574246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:01.698 [2024-11-05 17:56:21.574257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.943 ms 00:22:01.698 [2024-11-05 17:56:21.574277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.698 [2024-11-05 17:56:21.576360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.698 [2024-11-05 17:56:21.576391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:01.698 [2024-11-05 17:56:21.576401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.054 ms 00:22:01.698 [2024-11-05 17:56:21.576409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.698 [2024-11-05 17:56:21.578216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.698 [2024-11-05 17:56:21.578249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:01.698 [2024-11-05 17:56:21.578258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.779 ms 00:22:01.698 [2024-11-05 17:56:21.578266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.698 [2024-11-05 17:56:21.579965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.698 [2024-11-05 17:56:21.579997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:01.698 [2024-11-05 17:56:21.580007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.645 ms 00:22:01.698 [2024-11-05 17:56:21.580015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.698 [2024-11-05 17:56:21.580041] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:01.698 [2024-11-05 17:56:21.580057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 102656 / 261120 wr_cnt: 1 state: open 00:22:01.698 [2024-11-05 17:56:21.580079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 
17:56:21.580171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 
00:22:01.698 [2024-11-05 17:56:21.580386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:01.698 [2024-11-05 17:56:21.580505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 
wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:01.699 [2024-11-05 17:56:21.580940] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:01.699 [2024-11-05 17:56:21.580949] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 76f78c8c-6fe9-4417-9c01-875c84cce43d 00:22:01.699 [2024-11-05 17:56:21.580958] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 102656 00:22:01.699 [2024-11-05 17:56:21.580971] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 103616 00:22:01.699 [2024-11-05 17:56:21.580982] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 102656 00:22:01.699 [2024-11-05 17:56:21.580991] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0094 00:22:01.699 [2024-11-05 17:56:21.581000] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:01.699 [2024-11-05 17:56:21.581014] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:01.699 [2024-11-05 17:56:21.581025] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:01.699 [2024-11-05 17:56:21.581033] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:01.699 [2024-11-05 17:56:21.581040] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:01.699 [2024-11-05 17:56:21.581048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.699 [2024-11-05 17:56:21.581057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 
00:22:01.699 [2024-11-05 17:56:21.581074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.007 ms 00:22:01.699 [2024-11-05 17:56:21.581088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.699 [2024-11-05 17:56:21.582472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.699 [2024-11-05 17:56:21.582497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:01.699 [2024-11-05 17:56:21.582506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.369 ms 00:22:01.699 [2024-11-05 17:56:21.582513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.699 [2024-11-05 17:56:21.582589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.699 [2024-11-05 17:56:21.582597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:01.699 [2024-11-05 17:56:21.582611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:22:01.699 [2024-11-05 17:56:21.582619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.699 [2024-11-05 17:56:21.587549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.699 [2024-11-05 17:56:21.587579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:01.699 [2024-11-05 17:56:21.587588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.699 [2024-11-05 17:56:21.587596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.699 [2024-11-05 17:56:21.587649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.699 [2024-11-05 17:56:21.587658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:01.699 [2024-11-05 17:56:21.587665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.699 [2024-11-05 17:56:21.587673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.699 [2024-11-05 17:56:21.587735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.699 [2024-11-05 17:56:21.587745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:01.699 [2024-11-05 17:56:21.587753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.699 [2024-11-05 17:56:21.587760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.699 [2024-11-05 17:56:21.587775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.699 [2024-11-05 17:56:21.587783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:01.699 [2024-11-05 17:56:21.587791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.699 [2024-11-05 17:56:21.587798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.699 [2024-11-05 17:56:21.596721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.699 [2024-11-05 17:56:21.596771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:01.699 [2024-11-05 17:56:21.596781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.699 [2024-11-05 17:56:21.596789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.699 [2024-11-05 17:56:21.604051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.699 [2024-11-05 17:56:21.604110] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:01.699 [2024-11-05 17:56:21.604121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.699 [2024-11-05 17:56:21.604130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.699 [2024-11-05 17:56:21.604165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.699 [2024-11-05 17:56:21.604175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:01.699 [2024-11-05 17:56:21.604187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.699 [2024-11-05 17:56:21.604195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.700 [2024-11-05 17:56:21.604239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.700 [2024-11-05 17:56:21.604249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:01.700 [2024-11-05 17:56:21.604256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.700 [2024-11-05 17:56:21.604263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.700 [2024-11-05 17:56:21.604324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.700 [2024-11-05 17:56:21.604337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:01.700 [2024-11-05 17:56:21.604345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.700 [2024-11-05 17:56:21.604352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.700 [2024-11-05 17:56:21.604382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.700 [2024-11-05 17:56:21.604394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:01.700 [2024-11-05 17:56:21.604402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.700 [2024-11-05 17:56:21.604412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.700 [2024-11-05 17:56:21.604448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.700 [2024-11-05 17:56:21.604459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:01.700 [2024-11-05 17:56:21.604466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.700 [2024-11-05 17:56:21.604474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.700 [2024-11-05 17:56:21.604512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.700 [2024-11-05 17:56:21.604521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:01.700 [2024-11-05 17:56:21.604529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.700 [2024-11-05 17:56:21.604536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.700 [2024-11-05 17:56:21.604652] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 401.760 ms, result 0 00:22:02.640 00:22:02.640 00:22:02.640 17:56:22 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:22:02.640 [2024-11-05 17:56:22.435925] Starting SPDK v25.01-pre 
git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:22:02.640 [2024-11-05 17:56:22.436083] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89296 ] 00:22:02.640 [2024-11-05 17:56:22.568711] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:02.640 [2024-11-05 17:56:22.598427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.640 [2024-11-05 17:56:22.618595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.899 [2024-11-05 17:56:22.708827] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:02.899 [2024-11-05 17:56:22.708898] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:02.899 [2024-11-05 17:56:22.866124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.899 [2024-11-05 17:56:22.866177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:02.899 [2024-11-05 17:56:22.866190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:02.899 [2024-11-05 17:56:22.866198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.899 [2024-11-05 17:56:22.866245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.899 [2024-11-05 17:56:22.866257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:02.899 [2024-11-05 17:56:22.866266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:22:02.899 [2024-11-05 17:56:22.866273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.899 [2024-11-05 17:56:22.866297] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:02.899 [2024-11-05 17:56:22.866528] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:02.899 [2024-11-05 17:56:22.866542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.899 [2024-11-05 17:56:22.866549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:02.899 [2024-11-05 17:56:22.866559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:22:02.899 [2024-11-05 17:56:22.866566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.899 [2024-11-05 17:56:22.867622] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:02.899 [2024-11-05 17:56:22.869709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.899 [2024-11-05 17:56:22.869751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:02.899 [2024-11-05 17:56:22.869761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.088 ms 00:22:02.899 [2024-11-05 17:56:22.869778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.899 [2024-11-05 17:56:22.869894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.899 [2024-11-05 17:56:22.869923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:02.899 [2024-11-05 17:56:22.869934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.029 ms 00:22:02.899 [2024-11-05 17:56:22.869941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.899 [2024-11-05 17:56:22.874844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.899 [2024-11-05 17:56:22.874877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:02.899 [2024-11-05 17:56:22.874895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.842 ms 00:22:02.899 [2024-11-05 17:56:22.874903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.899 [2024-11-05 17:56:22.874983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.899 [2024-11-05 17:56:22.874993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:02.899 [2024-11-05 17:56:22.875004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:22:02.899 [2024-11-05 17:56:22.875011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.899 [2024-11-05 17:56:22.875052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.899 [2024-11-05 17:56:22.875061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:02.899 [2024-11-05 17:56:22.875082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:02.899 [2024-11-05 17:56:22.875092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.899 [2024-11-05 17:56:22.875113] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:02.899 [2024-11-05 17:56:22.876443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.899 [2024-11-05 17:56:22.876472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:02.899 [2024-11-05 17:56:22.876486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.335 ms 00:22:02.899 [2024-11-05 17:56:22.876494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.899 [2024-11-05 17:56:22.876526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.899 [2024-11-05 17:56:22.876535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:02.899 [2024-11-05 17:56:22.876543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:02.899 [2024-11-05 17:56:22.876553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.899 [2024-11-05 17:56:22.876573] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:02.899 [2024-11-05 17:56:22.876590] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:02.899 [2024-11-05 17:56:22.876625] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:02.899 [2024-11-05 17:56:22.876645] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:02.899 [2024-11-05 17:56:22.876749] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:02.899 [2024-11-05 17:56:22.876769] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:02.899 [2024-11-05 17:56:22.876781] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl0] layout blob store 0x190 bytes 00:22:02.899 [2024-11-05 17:56:22.876791] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:02.899 [2024-11-05 17:56:22.876800] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:02.899 [2024-11-05 17:56:22.876808] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:02.899 [2024-11-05 17:56:22.876818] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:02.899 [2024-11-05 17:56:22.876825] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:02.899 [2024-11-05 17:56:22.876835] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:02.899 [2024-11-05 17:56:22.876843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.899 [2024-11-05 17:56:22.876850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:02.899 [2024-11-05 17:56:22.876857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:22:02.899 [2024-11-05 17:56:22.876863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.899 [2024-11-05 17:56:22.876948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.899 [2024-11-05 17:56:22.876961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:02.899 [2024-11-05 17:56:22.876969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:02.899 [2024-11-05 17:56:22.876976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.899 [2024-11-05 17:56:22.877083] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:02.899 [2024-11-05 17:56:22.877099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:02.899 [2024-11-05 17:56:22.877108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:02.899 [2024-11-05 17:56:22.877115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.899 [2024-11-05 17:56:22.877127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:02.899 [2024-11-05 17:56:22.877135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:02.899 [2024-11-05 17:56:22.877141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:02.899 [2024-11-05 17:56:22.877148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:02.899 [2024-11-05 17:56:22.877160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:02.899 [2024-11-05 17:56:22.877167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:02.899 [2024-11-05 17:56:22.877173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:02.899 [2024-11-05 17:56:22.877179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:02.899 [2024-11-05 17:56:22.877185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:02.899 [2024-11-05 17:56:22.877192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:02.899 [2024-11-05 17:56:22.877198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:02.899 [2024-11-05 17:56:22.877205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.899 [2024-11-05 17:56:22.877212] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:02.899 [2024-11-05 17:56:22.877218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:02.899 [2024-11-05 17:56:22.877225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.899 [2024-11-05 17:56:22.877231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:02.899 [2024-11-05 17:56:22.877240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:02.899 [2024-11-05 17:56:22.877247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.899 [2024-11-05 17:56:22.877253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:02.899 [2024-11-05 17:56:22.877260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:02.899 [2024-11-05 17:56:22.877266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.899 [2024-11-05 17:56:22.877273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:02.899 [2024-11-05 17:56:22.877279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:02.899 [2024-11-05 17:56:22.877285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.899 [2024-11-05 17:56:22.877292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:02.899 [2024-11-05 17:56:22.877298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:02.899 [2024-11-05 17:56:22.877304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.899 [2024-11-05 17:56:22.877312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:02.899 [2024-11-05 17:56:22.877320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:02.899 [2024-11-05 17:56:22.877326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:02.899 [2024-11-05 17:56:22.877333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:02.899 [2024-11-05 17:56:22.877339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:02.899 [2024-11-05 17:56:22.877348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:02.899 [2024-11-05 17:56:22.877355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:02.900 [2024-11-05 17:56:22.877362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:02.900 [2024-11-05 17:56:22.877368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.900 [2024-11-05 17:56:22.877375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:02.900 [2024-11-05 17:56:22.877381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:02.900 [2024-11-05 17:56:22.877388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.900 [2024-11-05 17:56:22.877394] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:02.900 [2024-11-05 17:56:22.877406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:02.900 [2024-11-05 17:56:22.877413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:02.900 [2024-11-05 17:56:22.877420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.900 [2024-11-05 17:56:22.877427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:02.900 
[2024-11-05 17:56:22.877434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:02.900 [2024-11-05 17:56:22.877441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:02.900 [2024-11-05 17:56:22.877448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:02.900 [2024-11-05 17:56:22.877454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:02.900 [2024-11-05 17:56:22.877463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:02.900 [2024-11-05 17:56:22.877471] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:02.900 [2024-11-05 17:56:22.877481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:02.900 [2024-11-05 17:56:22.877489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:02.900 [2024-11-05 17:56:22.877496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:02.900 [2024-11-05 17:56:22.877503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:02.900 [2024-11-05 17:56:22.877510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:02.900 [2024-11-05 17:56:22.877517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:02.900 [2024-11-05 17:56:22.877524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:02.900 [2024-11-05 17:56:22.877531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:02.900 [2024-11-05 17:56:22.877538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:02.900 [2024-11-05 17:56:22.877545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:02.900 [2024-11-05 17:56:22.877552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:02.900 [2024-11-05 17:56:22.877559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:02.900 [2024-11-05 17:56:22.877566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:02.900 [2024-11-05 17:56:22.877573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:02.900 [2024-11-05 17:56:22.877582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:02.900 [2024-11-05 17:56:22.877589] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:02.900 [2024-11-05 17:56:22.877597] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:02.900 [2024-11-05 17:56:22.877608] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:02.900 [2024-11-05 17:56:22.877615] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:02.900 [2024-11-05 17:56:22.877622] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:02.900 [2024-11-05 17:56:22.877629] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:02.900 [2024-11-05 17:56:22.877637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.900 [2024-11-05 17:56:22.877645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:02.900 [2024-11-05 17:56:22.877652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.632 ms 00:22:02.900 [2024-11-05 17:56:22.877665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.900 [2024-11-05 17:56:22.886576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.900 [2024-11-05 17:56:22.886609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:02.900 [2024-11-05 17:56:22.886618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.872 ms 00:22:02.900 [2024-11-05 17:56:22.886626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.900 [2024-11-05 17:56:22.886703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.900 [2024-11-05 17:56:22.886711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:02.900 [2024-11-05 17:56:22.886727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:22:02.900 [2024-11-05 17:56:22.886734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.158 [2024-11-05 17:56:22.903647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.158 [2024-11-05 17:56:22.903693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:03.158 [2024-11-05 17:56:22.903705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.865 ms 00:22:03.158 [2024-11-05 17:56:22.903717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.158 [2024-11-05 17:56:22.903755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.158 [2024-11-05 17:56:22.903764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:03.158 [2024-11-05 17:56:22.903772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:03.158 [2024-11-05 17:56:22.903779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.158 [2024-11-05 17:56:22.904156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.158 [2024-11-05 17:56:22.904180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:03.158 [2024-11-05 17:56:22.904190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:22:03.158 [2024-11-05 17:56:22.904199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.158 [2024-11-05 
17:56:22.904330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.158 [2024-11-05 17:56:22.904349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:03.158 [2024-11-05 17:56:22.904358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:22:03.158 [2024-11-05 17:56:22.904365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.158 [2024-11-05 17:56:22.909674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.158 [2024-11-05 17:56:22.909711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:03.158 [2024-11-05 17:56:22.909727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.285 ms 00:22:03.158 [2024-11-05 17:56:22.909739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.158 [2024-11-05 17:56:22.912352] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:22:03.158 [2024-11-05 17:56:22.912391] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:03.158 [2024-11-05 17:56:22.912406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.158 [2024-11-05 17:56:22.912415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:03.158 [2024-11-05 17:56:22.912425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.579 ms 00:22:03.158 [2024-11-05 17:56:22.912433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.158 [2024-11-05 17:56:22.927923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.158 [2024-11-05 17:56:22.927960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:03.158 [2024-11-05 17:56:22.927971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.392 ms 00:22:03.158 [2024-11-05 17:56:22.927978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.158 [2024-11-05 17:56:22.929567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.158 [2024-11-05 17:56:22.929599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:03.158 [2024-11-05 17:56:22.929608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.534 ms 00:22:03.158 [2024-11-05 17:56:22.929619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.158 [2024-11-05 17:56:22.930983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.158 [2024-11-05 17:56:22.931014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:03.158 [2024-11-05 17:56:22.931022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.333 ms 00:22:03.158 [2024-11-05 17:56:22.931029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.158 [2024-11-05 17:56:22.931352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.158 [2024-11-05 17:56:22.931370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:03.158 [2024-11-05 17:56:22.931379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:22:03.158 [2024-11-05 17:56:22.931386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.158 [2024-11-05 17:56:22.946850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:03.158 [2024-11-05 17:56:22.946899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:03.158 [2024-11-05 17:56:22.946910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.442 ms 00:22:03.158 [2024-11-05 17:56:22.946923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.158 [2024-11-05 17:56:22.954308] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:03.158 [2024-11-05 17:56:22.956670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.158 [2024-11-05 17:56:22.956706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:03.158 [2024-11-05 17:56:22.956718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.708 ms 00:22:03.158 [2024-11-05 17:56:22.956731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.158 [2024-11-05 17:56:22.956781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.158 [2024-11-05 17:56:22.956791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:03.158 [2024-11-05 17:56:22.956801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:03.158 [2024-11-05 17:56:22.956809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.158 [2024-11-05 17:56:22.958205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.158 [2024-11-05 17:56:22.958239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:03.158 [2024-11-05 17:56:22.958250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.354 ms 00:22:03.158 [2024-11-05 17:56:22.958258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.158 [2024-11-05 17:56:22.958281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.158 [2024-11-05 17:56:22.958289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:03.158 [2024-11-05 17:56:22.958297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:03.158 [2024-11-05 17:56:22.958304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.158 [2024-11-05 17:56:22.958361] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:03.158 [2024-11-05 17:56:22.958372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.158 [2024-11-05 17:56:22.958379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:03.158 [2024-11-05 17:56:22.958389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:03.158 [2024-11-05 17:56:22.958396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.158 [2024-11-05 17:56:22.961446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.158 [2024-11-05 17:56:22.961480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:03.158 [2024-11-05 17:56:22.961491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.030 ms 00:22:03.158 [2024-11-05 17:56:22.961506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.158 [2024-11-05 17:56:22.961577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.158 [2024-11-05 17:56:22.961587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:22:03.158 [2024-11-05 17:56:22.961599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:03.158 [2024-11-05 17:56:22.961608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.158 [2024-11-05 17:56:22.962481] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 95.957 ms, result 0 00:22:04.533  [2024-11-05T17:56:25.474Z] Copying: 38/1024 [MB] (38 MBps) [2024-11-05T17:56:26.410Z] Copying: 77/1024 [MB] (39 MBps) [2024-11-05T17:56:27.351Z] Copying: 100/1024 [MB] (23 MBps) [2024-11-05T17:56:28.291Z] Copying: 123/1024 [MB] (22 MBps) [2024-11-05T17:56:29.247Z] Copying: 145/1024 [MB] (21 MBps) [2024-11-05T17:56:30.206Z] Copying: 161/1024 [MB] (16 MBps) [2024-11-05T17:56:31.149Z] Copying: 177/1024 [MB] (15 MBps) [2024-11-05T17:56:32.533Z] Copying: 193/1024 [MB] (16 MBps) [2024-11-05T17:56:33.475Z] Copying: 211/1024 [MB] (17 MBps) [2024-11-05T17:56:34.412Z] Copying: 233/1024 [MB] (21 MBps) [2024-11-05T17:56:35.351Z] Copying: 256/1024 [MB] (23 MBps) [2024-11-05T17:56:36.283Z] Copying: 278/1024 [MB] (21 MBps) [2024-11-05T17:56:37.218Z] Copying: 310/1024 [MB] (32 MBps) [2024-11-05T17:56:38.151Z] Copying: 356/1024 [MB] (46 MBps) [2024-11-05T17:56:39.530Z] Copying: 395/1024 [MB] (38 MBps) [2024-11-05T17:56:40.505Z] Copying: 412/1024 [MB] (17 MBps) [2024-11-05T17:56:41.441Z] Copying: 425/1024 [MB] (13 MBps) [2024-11-05T17:56:42.376Z] Copying: 442/1024 [MB] (16 MBps) [2024-11-05T17:56:43.312Z] Copying: 467/1024 [MB] (25 MBps) [2024-11-05T17:56:44.249Z] Copying: 483/1024 [MB] (15 MBps) [2024-11-05T17:56:45.190Z] Copying: 494/1024 [MB] (11 MBps) [2024-11-05T17:56:46.191Z] Copying: 506/1024 [MB] (11 MBps) [2024-11-05T17:56:47.567Z] Copying: 520/1024 [MB] (14 MBps) [2024-11-05T17:56:48.504Z] Copying: 541/1024 [MB] (20 MBps) [2024-11-05T17:56:49.440Z] Copying: 558/1024 [MB] (17 MBps) [2024-11-05T17:56:50.378Z] Copying: 583/1024 [MB] (24 MBps) [2024-11-05T17:56:51.316Z] Copying: 612/1024 [MB] (29 MBps) [2024-11-05T17:56:52.249Z] Copying: 629/1024 [MB] (16 MBps) [2024-11-05T17:56:53.184Z] Copying: 666/1024 [MB] (37 MBps) [2024-11-05T17:56:54.557Z] Copying: 706/1024 [MB] (39 MBps) [2024-11-05T17:56:55.534Z] Copying: 750/1024 [MB] (44 MBps) [2024-11-05T17:56:56.468Z] Copying: 798/1024 [MB] (48 MBps) [2024-11-05T17:56:57.399Z] Copying: 844/1024 [MB] (45 MBps) [2024-11-05T17:56:58.333Z] Copying: 889/1024 [MB] (45 MBps) [2024-11-05T17:56:59.265Z] Copying: 936/1024 [MB] (46 MBps) [2024-11-05T17:57:00.218Z] Copying: 980/1024 [MB] (44 MBps) [2024-11-05T17:57:00.218Z] Copying: 1023/1024 [MB] (42 MBps) [2024-11-05T17:57:00.478Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-11-05 17:57:00.364304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.487 [2024-11-05 17:57:00.364390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:40.487 [2024-11-05 17:57:00.364410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:40.487 [2024-11-05 17:57:00.364421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.487 [2024-11-05 17:57:00.364449] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:40.487 [2024-11-05 17:57:00.365079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.487 [2024-11-05 17:57:00.365108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:40.487 
[2024-11-05 17:57:00.365119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.594 ms 00:22:40.487 [2024-11-05 17:57:00.365136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.487 [2024-11-05 17:57:00.365417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.487 [2024-11-05 17:57:00.365437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:40.487 [2024-11-05 17:57:00.365448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:22:40.487 [2024-11-05 17:57:00.365458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.487 [2024-11-05 17:57:00.372782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.487 [2024-11-05 17:57:00.372823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:40.487 [2024-11-05 17:57:00.372836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.299 ms 00:22:40.487 [2024-11-05 17:57:00.372847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.487 [2024-11-05 17:57:00.379645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.487 [2024-11-05 17:57:00.379676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:40.487 [2024-11-05 17:57:00.379687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.756 ms 00:22:40.487 [2024-11-05 17:57:00.379696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.487 [2024-11-05 17:57:00.380955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.487 [2024-11-05 17:57:00.380989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:40.487 [2024-11-05 17:57:00.380998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.211 ms 00:22:40.487 [2024-11-05 17:57:00.381012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.487 [2024-11-05 17:57:00.384829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.487 [2024-11-05 17:57:00.384895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:40.487 [2024-11-05 17:57:00.384907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.781 ms 00:22:40.487 [2024-11-05 17:57:00.384924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.487 [2024-11-05 17:57:00.438613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.487 [2024-11-05 17:57:00.438667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:40.487 [2024-11-05 17:57:00.438686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.652 ms 00:22:40.487 [2024-11-05 17:57:00.438693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.487 [2024-11-05 17:57:00.440329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.487 [2024-11-05 17:57:00.440359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:40.487 [2024-11-05 17:57:00.440367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.621 ms 00:22:40.487 [2024-11-05 17:57:00.440373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.487 [2024-11-05 17:57:00.441477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.487 [2024-11-05 17:57:00.441505] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:40.487 [2024-11-05 17:57:00.441512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.070 ms 00:22:40.487 [2024-11-05 17:57:00.441518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.487 [2024-11-05 17:57:00.442313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.487 [2024-11-05 17:57:00.442342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:40.487 [2024-11-05 17:57:00.442350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.772 ms 00:22:40.487 [2024-11-05 17:57:00.442355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.487 [2024-11-05 17:57:00.443161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.487 [2024-11-05 17:57:00.443189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:40.487 [2024-11-05 17:57:00.443197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.763 ms 00:22:40.487 [2024-11-05 17:57:00.443203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.487 [2024-11-05 17:57:00.443226] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:40.487 [2024-11-05 17:57:00.443238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:22:40.487 [2024-11-05 17:57:00.443247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:40.487 [2024-11-05 17:57:00.443254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:40.487 [2024-11-05 17:57:00.443260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:40.487 [2024-11-05 17:57:00.443267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:40.487 [2024-11-05 17:57:00.443273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:40.487 [2024-11-05 17:57:00.443279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:40.487 [2024-11-05 17:57:00.443286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 
wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443650] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443802] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:40.488 [2024-11-05 17:57:00.443847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:40.489 [2024-11-05 17:57:00.443853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:40.489 [2024-11-05 17:57:00.443858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:40.489 [2024-11-05 17:57:00.443871] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:40.489 [2024-11-05 17:57:00.443877] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 76f78c8c-6fe9-4417-9c01-875c84cce43d 00:22:40.489 [2024-11-05 17:57:00.443890] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:22:40.489 [2024-11-05 17:57:00.443897] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 29376 00:22:40.489 [2024-11-05 17:57:00.443910] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 28416 00:22:40.489 [2024-11-05 17:57:00.443917] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0338 00:22:40.489 [2024-11-05 17:57:00.443922] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:40.489 [2024-11-05 17:57:00.443929] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:40.489 [2024-11-05 17:57:00.443938] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:40.489 [2024-11-05 17:57:00.443943] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:40.489 [2024-11-05 17:57:00.443949] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:40.489 [2024-11-05 17:57:00.443955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.489 [2024-11-05 17:57:00.443962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:40.489 [2024-11-05 17:57:00.443968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.729 ms 00:22:40.489 [2024-11-05 17:57:00.443974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.489 [2024-11-05 17:57:00.445301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.489 [2024-11-05 17:57:00.445323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:40.489 [2024-11-05 17:57:00.445332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.311 ms 00:22:40.489 [2024-11-05 17:57:00.445338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
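(A back-of-the-envelope check of the WAF values in the two statistics dumps above: WAF here is total writes divided by user writes, so

    103616 / 102656 ≈ 1.0094   (first shutdown, above)
     29376 /  28416 ≈ 1.0338   (this shutdown)

both of which match the figures logged by ftl_dev_dump_stats.)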
00:22:40.489 [2024-11-05 17:57:00.445416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.489 [2024-11-05 17:57:00.445424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:40.489 [2024-11-05 17:57:00.445431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:22:40.489 [2024-11-05 17:57:00.445437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.489 [2024-11-05 17:57:00.449765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.489 [2024-11-05 17:57:00.449791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:40.489 [2024-11-05 17:57:00.449799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.489 [2024-11-05 17:57:00.449805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.489 [2024-11-05 17:57:00.449846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.489 [2024-11-05 17:57:00.449853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:40.489 [2024-11-05 17:57:00.449859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.489 [2024-11-05 17:57:00.449866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.489 [2024-11-05 17:57:00.449911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.489 [2024-11-05 17:57:00.449919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:40.489 [2024-11-05 17:57:00.449926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.489 [2024-11-05 17:57:00.449932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.489 [2024-11-05 17:57:00.449943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.489 [2024-11-05 17:57:00.449949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:40.489 [2024-11-05 17:57:00.449955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.489 [2024-11-05 17:57:00.449960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.489 [2024-11-05 17:57:00.458119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.489 [2024-11-05 17:57:00.458157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:40.489 [2024-11-05 17:57:00.458165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.489 [2024-11-05 17:57:00.458171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.489 [2024-11-05 17:57:00.464530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.489 [2024-11-05 17:57:00.464566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:40.489 [2024-11-05 17:57:00.464574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.489 [2024-11-05 17:57:00.464580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.489 [2024-11-05 17:57:00.464607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.489 [2024-11-05 17:57:00.464615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:40.489 [2024-11-05 17:57:00.464621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.489 [2024-11-05 
17:57:00.464627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.489 [2024-11-05 17:57:00.464698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.489 [2024-11-05 17:57:00.464708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:40.489 [2024-11-05 17:57:00.464714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.489 [2024-11-05 17:57:00.464721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.489 [2024-11-05 17:57:00.464772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.489 [2024-11-05 17:57:00.464790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:40.489 [2024-11-05 17:57:00.464797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.489 [2024-11-05 17:57:00.464803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.489 [2024-11-05 17:57:00.464825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.489 [2024-11-05 17:57:00.464837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:40.489 [2024-11-05 17:57:00.464844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.489 [2024-11-05 17:57:00.464850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.489 [2024-11-05 17:57:00.464878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.489 [2024-11-05 17:57:00.464889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:40.489 [2024-11-05 17:57:00.464895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.489 [2024-11-05 17:57:00.464902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.489 [2024-11-05 17:57:00.464938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.489 [2024-11-05 17:57:00.464947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:40.489 [2024-11-05 17:57:00.464953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.489 [2024-11-05 17:57:00.464960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.489 [2024-11-05 17:57:00.465054] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 100.750 ms, result 0 00:22:40.747 00:22:40.747 00:22:40.747 17:57:00 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:43.275 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:43.275 17:57:02 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:43.275 17:57:02 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:22:43.275 17:57:02 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:43.275 17:57:02 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:43.275 17:57:02 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:43.275 17:57:02 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 87189 00:22:43.275 17:57:02 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 87189 ']' 00:22:43.275 17:57:02 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 87189 00:22:43.275 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (87189) - No such process 00:22:43.275 Process with pid 87189 is not found 00:22:43.275 17:57:02 ftl.ftl_restore -- common/autotest_common.sh@979 -- # echo 'Process with pid 87189 is not found' 00:22:43.275 17:57:02 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:22:43.275 Remove shared memory files 00:22:43.275 17:57:02 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:43.275 17:57:02 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:22:43.275 17:57:02 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:22:43.275 17:57:02 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:22:43.275 17:57:02 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:43.275 17:57:02 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:22:43.275 00:22:43.275 real 4m6.434s 00:22:43.275 user 3m55.749s 00:22:43.275 sys 0m11.292s 00:22:43.275 17:57:02 ftl.ftl_restore -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:43.275 17:57:02 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:43.275 ************************************ 00:22:43.275 END TEST ftl_restore 00:22:43.275 ************************************ 00:22:43.275 17:57:02 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:43.275 17:57:02 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:22:43.275 17:57:02 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:43.275 17:57:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:43.275 ************************************ 00:22:43.275 START TEST ftl_dirty_shutdown 00:22:43.275 ************************************ 00:22:43.275 17:57:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:43.275 * Looking for test storage... 
00:22:43.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:43.275 17:57:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:43.275 17:57:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:43.275 17:57:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:22:43.275 17:57:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:43.275 17:57:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:43.275 17:57:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:43.275 17:57:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:43.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.276 --rc genhtml_branch_coverage=1 00:22:43.276 --rc genhtml_function_coverage=1 00:22:43.276 --rc genhtml_legend=1 00:22:43.276 --rc geninfo_all_blocks=1 00:22:43.276 --rc geninfo_unexecuted_blocks=1 00:22:43.276 00:22:43.276 ' 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:43.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.276 --rc genhtml_branch_coverage=1 00:22:43.276 --rc genhtml_function_coverage=1 00:22:43.276 --rc genhtml_legend=1 00:22:43.276 --rc geninfo_all_blocks=1 00:22:43.276 --rc geninfo_unexecuted_blocks=1 00:22:43.276 00:22:43.276 ' 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:43.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.276 --rc genhtml_branch_coverage=1 00:22:43.276 --rc genhtml_function_coverage=1 00:22:43.276 --rc genhtml_legend=1 00:22:43.276 --rc geninfo_all_blocks=1 00:22:43.276 --rc geninfo_unexecuted_blocks=1 00:22:43.276 00:22:43.276 ' 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:43.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.276 --rc genhtml_branch_coverage=1 00:22:43.276 --rc genhtml_function_coverage=1 00:22:43.276 --rc genhtml_legend=1 00:22:43.276 --rc geninfo_all_blocks=1 00:22:43.276 --rc geninfo_unexecuted_blocks=1 00:22:43.276 00:22:43.276 ' 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:43.276 17:57:03 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:43.276 17:57:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:43.277 17:57:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:43.277 17:57:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:43.277 17:57:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:43.277 17:57:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=89779 00:22:43.277 17:57:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 89779 00:22:43.277 17:57:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@833 -- # '[' -z 89779 ']' 00:22:43.277 17:57:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.277 17:57:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:43.277 17:57:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.277 17:57:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:43.277 17:57:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:43.277 17:57:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:43.277 [2024-11-05 17:57:03.150557] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:22:43.277 [2024-11-05 17:57:03.150659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89779 ] 00:22:43.534 [2024-11-05 17:57:03.274583] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
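The trace above brings up a single-core SPDK target (-m 0x1) and blocks in waitforlisten until its RPC socket answers. A minimal bash sketch of that bring-up, using the paths and core mask from this run; the polling loop is a stand-in for the waitforlisten helper, not its actual implementation:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 &      # one reactor, pinned to core 0
  svcpid=$!
  # Wait for the default RPC socket (/var/tmp/spdk.sock) to come up before
  # issuing any bdev RPCs; rpc_get_methods is a cheap query to probe it.
  until "$SPDK_DIR/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.3
  done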
00:22:43.534 [2024-11-05 17:57:03.305092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.534 [2024-11-05 17:57:03.325996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.099 17:57:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:44.099 17:57:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@866 -- # return 0 00:22:44.099 17:57:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:44.099 17:57:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:22:44.099 17:57:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:44.099 17:57:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:22:44.099 17:57:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:22:44.099 17:57:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:44.356 17:57:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:44.356 17:57:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:22:44.356 17:57:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:44.356 17:57:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:22:44.356 17:57:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:22:44.356 17:57:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:22:44.356 17:57:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:22:44.356 17:57:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:44.613 17:57:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:22:44.613 { 00:22:44.613 "name": "nvme0n1", 00:22:44.613 "aliases": [ 00:22:44.613 "9094bf13-5153-49bd-821a-f93d9c3af4a5" 00:22:44.613 ], 00:22:44.613 "product_name": "NVMe disk", 00:22:44.614 "block_size": 4096, 00:22:44.614 "num_blocks": 1310720, 00:22:44.614 "uuid": "9094bf13-5153-49bd-821a-f93d9c3af4a5", 00:22:44.614 "numa_id": -1, 00:22:44.614 "assigned_rate_limits": { 00:22:44.614 "rw_ios_per_sec": 0, 00:22:44.614 "rw_mbytes_per_sec": 0, 00:22:44.614 "r_mbytes_per_sec": 0, 00:22:44.614 "w_mbytes_per_sec": 0 00:22:44.614 }, 00:22:44.614 "claimed": true, 00:22:44.614 "claim_type": "read_many_write_one", 00:22:44.614 "zoned": false, 00:22:44.614 "supported_io_types": { 00:22:44.614 "read": true, 00:22:44.614 "write": true, 00:22:44.614 "unmap": true, 00:22:44.614 "flush": true, 00:22:44.614 "reset": true, 00:22:44.614 "nvme_admin": true, 00:22:44.614 "nvme_io": true, 00:22:44.614 "nvme_io_md": false, 00:22:44.614 "write_zeroes": true, 00:22:44.614 "zcopy": false, 00:22:44.614 "get_zone_info": false, 00:22:44.614 "zone_management": false, 00:22:44.614 "zone_append": false, 00:22:44.614 "compare": true, 00:22:44.614 "compare_and_write": false, 00:22:44.614 "abort": true, 00:22:44.614 "seek_hole": false, 00:22:44.614 "seek_data": false, 00:22:44.614 "copy": true, 00:22:44.614 "nvme_iov_md": false 00:22:44.614 }, 00:22:44.614 "driver_specific": { 00:22:44.614 "nvme": [ 00:22:44.614 { 00:22:44.614 "pci_address": "0000:00:11.0", 00:22:44.614 "trid": { 00:22:44.614 "trtype": "PCIe", 00:22:44.614 "traddr": "0000:00:11.0" 00:22:44.614 }, 00:22:44.614 "ctrlr_data": { 
00:22:44.614 "cntlid": 0, 00:22:44.614 "vendor_id": "0x1b36", 00:22:44.614 "model_number": "QEMU NVMe Ctrl", 00:22:44.614 "serial_number": "12341", 00:22:44.614 "firmware_revision": "8.0.0", 00:22:44.614 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:44.614 "oacs": { 00:22:44.614 "security": 0, 00:22:44.614 "format": 1, 00:22:44.614 "firmware": 0, 00:22:44.614 "ns_manage": 1 00:22:44.614 }, 00:22:44.614 "multi_ctrlr": false, 00:22:44.614 "ana_reporting": false 00:22:44.614 }, 00:22:44.614 "vs": { 00:22:44.614 "nvme_version": "1.4" 00:22:44.614 }, 00:22:44.614 "ns_data": { 00:22:44.614 "id": 1, 00:22:44.614 "can_share": false 00:22:44.614 } 00:22:44.614 } 00:22:44.614 ], 00:22:44.614 "mp_policy": "active_passive" 00:22:44.614 } 00:22:44.614 } 00:22:44.614 ]' 00:22:44.614 17:57:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:22:44.614 17:57:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:22:44.614 17:57:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:22:44.614 17:57:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:22:44.614 17:57:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:22:44.614 17:57:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:22:44.614 17:57:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:22:44.614 17:57:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:44.614 17:57:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:22:44.614 17:57:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:44.614 17:57:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:44.872 17:57:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=48eece26-898b-44f6-952c-0c08e72728c6 00:22:44.872 17:57:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:22:44.872 17:57:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 48eece26-898b-44f6-952c-0c08e72728c6 00:22:45.186 17:57:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:45.444 17:57:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=0b48809d-b1e7-4ea9-bf23-cb2429ae45dd 00:22:45.444 17:57:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0b48809d-b1e7-4ea9-bf23-cb2429ae45dd 00:22:45.444 17:57:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=6e2f5678-83bf-4355-af9e-dcfc699a238f 00:22:45.444 17:57:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:22:45.444 17:57:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6e2f5678-83bf-4355-af9e-dcfc699a238f 00:22:45.444 17:57:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:22:45.444 17:57:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:45.444 17:57:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=6e2f5678-83bf-4355-af9e-dcfc699a238f 00:22:45.444 17:57:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:22:45.444 17:57:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 
6e2f5678-83bf-4355-af9e-dcfc699a238f 00:22:45.444 17:57:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=6e2f5678-83bf-4355-af9e-dcfc699a238f 00:22:45.444 17:57:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:22:45.444 17:57:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:22:45.444 17:57:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:22:45.444 17:57:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6e2f5678-83bf-4355-af9e-dcfc699a238f 00:22:46.010 17:57:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:22:46.010 { 00:22:46.010 "name": "6e2f5678-83bf-4355-af9e-dcfc699a238f", 00:22:46.010 "aliases": [ 00:22:46.010 "lvs/nvme0n1p0" 00:22:46.010 ], 00:22:46.010 "product_name": "Logical Volume", 00:22:46.010 "block_size": 4096, 00:22:46.010 "num_blocks": 26476544, 00:22:46.010 "uuid": "6e2f5678-83bf-4355-af9e-dcfc699a238f", 00:22:46.010 "assigned_rate_limits": { 00:22:46.010 "rw_ios_per_sec": 0, 00:22:46.010 "rw_mbytes_per_sec": 0, 00:22:46.010 "r_mbytes_per_sec": 0, 00:22:46.010 "w_mbytes_per_sec": 0 00:22:46.010 }, 00:22:46.010 "claimed": false, 00:22:46.010 "zoned": false, 00:22:46.010 "supported_io_types": { 00:22:46.010 "read": true, 00:22:46.010 "write": true, 00:22:46.010 "unmap": true, 00:22:46.010 "flush": false, 00:22:46.010 "reset": true, 00:22:46.010 "nvme_admin": false, 00:22:46.010 "nvme_io": false, 00:22:46.010 "nvme_io_md": false, 00:22:46.010 "write_zeroes": true, 00:22:46.010 "zcopy": false, 00:22:46.010 "get_zone_info": false, 00:22:46.010 "zone_management": false, 00:22:46.010 "zone_append": false, 00:22:46.010 "compare": false, 00:22:46.010 "compare_and_write": false, 00:22:46.010 "abort": false, 00:22:46.010 "seek_hole": true, 00:22:46.010 "seek_data": true, 00:22:46.010 "copy": false, 00:22:46.010 "nvme_iov_md": false 00:22:46.010 }, 00:22:46.010 "driver_specific": { 00:22:46.010 "lvol": { 00:22:46.010 "lvol_store_uuid": "0b48809d-b1e7-4ea9-bf23-cb2429ae45dd", 00:22:46.010 "base_bdev": "nvme0n1", 00:22:46.010 "thin_provision": true, 00:22:46.010 "num_allocated_clusters": 0, 00:22:46.010 "snapshot": false, 00:22:46.010 "clone": false, 00:22:46.010 "esnap_clone": false 00:22:46.010 } 00:22:46.010 } 00:22:46.010 } 00:22:46.010 ]' 00:22:46.010 17:57:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:22:46.010 17:57:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:22:46.010 17:57:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:22:46.010 17:57:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:22:46.010 17:57:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:22:46.010 17:57:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:22:46.010 17:57:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:22:46.010 17:57:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:22:46.010 17:57:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:46.268 17:57:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:46.268 17:57:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:46.268 17:57:06 
ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 6e2f5678-83bf-4355-af9e-dcfc699a238f 00:22:46.268 17:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=6e2f5678-83bf-4355-af9e-dcfc699a238f 00:22:46.268 17:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:22:46.268 17:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:22:46.268 17:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:22:46.268 17:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6e2f5678-83bf-4355-af9e-dcfc699a238f 00:22:46.268 17:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:22:46.268 { 00:22:46.268 "name": "6e2f5678-83bf-4355-af9e-dcfc699a238f", 00:22:46.268 "aliases": [ 00:22:46.268 "lvs/nvme0n1p0" 00:22:46.268 ], 00:22:46.268 "product_name": "Logical Volume", 00:22:46.268 "block_size": 4096, 00:22:46.268 "num_blocks": 26476544, 00:22:46.268 "uuid": "6e2f5678-83bf-4355-af9e-dcfc699a238f", 00:22:46.268 "assigned_rate_limits": { 00:22:46.268 "rw_ios_per_sec": 0, 00:22:46.268 "rw_mbytes_per_sec": 0, 00:22:46.268 "r_mbytes_per_sec": 0, 00:22:46.268 "w_mbytes_per_sec": 0 00:22:46.268 }, 00:22:46.269 "claimed": false, 00:22:46.269 "zoned": false, 00:22:46.269 "supported_io_types": { 00:22:46.269 "read": true, 00:22:46.269 "write": true, 00:22:46.269 "unmap": true, 00:22:46.269 "flush": false, 00:22:46.269 "reset": true, 00:22:46.269 "nvme_admin": false, 00:22:46.269 "nvme_io": false, 00:22:46.269 "nvme_io_md": false, 00:22:46.269 "write_zeroes": true, 00:22:46.269 "zcopy": false, 00:22:46.269 "get_zone_info": false, 00:22:46.269 "zone_management": false, 00:22:46.269 "zone_append": false, 00:22:46.269 "compare": false, 00:22:46.269 "compare_and_write": false, 00:22:46.269 "abort": false, 00:22:46.269 "seek_hole": true, 00:22:46.269 "seek_data": true, 00:22:46.269 "copy": false, 00:22:46.269 "nvme_iov_md": false 00:22:46.269 }, 00:22:46.269 "driver_specific": { 00:22:46.269 "lvol": { 00:22:46.269 "lvol_store_uuid": "0b48809d-b1e7-4ea9-bf23-cb2429ae45dd", 00:22:46.269 "base_bdev": "nvme0n1", 00:22:46.269 "thin_provision": true, 00:22:46.269 "num_allocated_clusters": 0, 00:22:46.269 "snapshot": false, 00:22:46.269 "clone": false, 00:22:46.269 "esnap_clone": false 00:22:46.269 } 00:22:46.269 } 00:22:46.269 } 00:22:46.269 ]' 00:22:46.269 17:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:22:46.269 17:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:22:46.269 17:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:22:46.269 17:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:22:46.269 17:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:22:46.269 17:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:22:46.269 17:57:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:22:46.269 17:57:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:46.526 17:57:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:22:46.526 17:57:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 6e2f5678-83bf-4355-af9e-dcfc699a238f 00:22:46.526 
17:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=6e2f5678-83bf-4355-af9e-dcfc699a238f 00:22:46.526 17:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:22:46.526 17:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:22:46.526 17:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:22:46.526 17:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6e2f5678-83bf-4355-af9e-dcfc699a238f 00:22:46.785 17:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:22:46.785 { 00:22:46.785 "name": "6e2f5678-83bf-4355-af9e-dcfc699a238f", 00:22:46.785 "aliases": [ 00:22:46.785 "lvs/nvme0n1p0" 00:22:46.785 ], 00:22:46.785 "product_name": "Logical Volume", 00:22:46.785 "block_size": 4096, 00:22:46.785 "num_blocks": 26476544, 00:22:46.785 "uuid": "6e2f5678-83bf-4355-af9e-dcfc699a238f", 00:22:46.785 "assigned_rate_limits": { 00:22:46.785 "rw_ios_per_sec": 0, 00:22:46.785 "rw_mbytes_per_sec": 0, 00:22:46.785 "r_mbytes_per_sec": 0, 00:22:46.785 "w_mbytes_per_sec": 0 00:22:46.785 }, 00:22:46.785 "claimed": false, 00:22:46.785 "zoned": false, 00:22:46.785 "supported_io_types": { 00:22:46.785 "read": true, 00:22:46.785 "write": true, 00:22:46.785 "unmap": true, 00:22:46.785 "flush": false, 00:22:46.785 "reset": true, 00:22:46.785 "nvme_admin": false, 00:22:46.785 "nvme_io": false, 00:22:46.785 "nvme_io_md": false, 00:22:46.785 "write_zeroes": true, 00:22:46.785 "zcopy": false, 00:22:46.785 "get_zone_info": false, 00:22:46.785 "zone_management": false, 00:22:46.785 "zone_append": false, 00:22:46.785 "compare": false, 00:22:46.785 "compare_and_write": false, 00:22:46.785 "abort": false, 00:22:46.785 "seek_hole": true, 00:22:46.785 "seek_data": true, 00:22:46.785 "copy": false, 00:22:46.785 "nvme_iov_md": false 00:22:46.785 }, 00:22:46.785 "driver_specific": { 00:22:46.785 "lvol": { 00:22:46.785 "lvol_store_uuid": "0b48809d-b1e7-4ea9-bf23-cb2429ae45dd", 00:22:46.785 "base_bdev": "nvme0n1", 00:22:46.785 "thin_provision": true, 00:22:46.785 "num_allocated_clusters": 0, 00:22:46.785 "snapshot": false, 00:22:46.785 "clone": false, 00:22:46.785 "esnap_clone": false 00:22:46.785 } 00:22:46.785 } 00:22:46.785 } 00:22:46.785 ]' 00:22:46.785 17:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:22:46.785 17:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:22:46.785 17:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:22:46.785 17:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:22:46.785 17:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:22:46.785 17:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:22:46.786 17:57:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:22:46.786 17:57:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 6e2f5678-83bf-4355-af9e-dcfc699a238f --l2p_dram_limit 10' 00:22:46.786 17:57:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:22:46.786 17:57:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:22:46.786 17:57:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 
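The preceding traces attach the base controller (0000:00:11.0), carve lvstore "lvs" and a thin-provisioned 103424 MiB volume out of nvme0n1, attach the cache controller (0000:00:10.0), split off a 5171 MiB write-buffer partition, and assemble the bdev_ftl_create arguments. A condensed sketch of the same RPC sequence, with the addresses and sizes from this run; capturing the lvstore and lvol UUIDs from the command output is an assumption about how one would script it standalone:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Base namespace (0000:00:11.0) -> lvstore -> thin-provisioned 103424 MiB volume:
  "$RPC" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  LVS_UUID=$("$RPC" bdev_lvol_create_lvstore nvme0n1 lvs)
  LVOL_UUID=$("$RPC" bdev_lvol_create nvme0n1p0 103424 -t -u "$LVS_UUID")
  # Cache namespace (0000:00:10.0): one 5171 MiB split used as the write buffer:
  "$RPC" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  "$RPC" bdev_split_create nvc0n1 -s 5171 1
  # FTL bdev on top, L2P capped at 10 MiB of DRAM (the call issued on the next trace line):
  "$RPC" -t 240 bdev_ftl_create -b ftl0 -d "$LVOL_UUID" --l2p_dram_limit 10 -c nvc0n1p0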
00:22:46.786 17:57:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6e2f5678-83bf-4355-af9e-dcfc699a238f --l2p_dram_limit 10 -c nvc0n1p0 00:22:47.045 [2024-11-05 17:57:06.940996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.045 [2024-11-05 17:57:06.941060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:47.045 [2024-11-05 17:57:06.941087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:47.045 [2024-11-05 17:57:06.941096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.045 [2024-11-05 17:57:06.941149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.045 [2024-11-05 17:57:06.941159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:47.045 [2024-11-05 17:57:06.941174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:47.045 [2024-11-05 17:57:06.941181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.045 [2024-11-05 17:57:06.941208] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:47.045 [2024-11-05 17:57:06.943770] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:47.045 [2024-11-05 17:57:06.943807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.045 [2024-11-05 17:57:06.943815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:47.045 [2024-11-05 17:57:06.943826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.609 ms 00:22:47.045 [2024-11-05 17:57:06.943834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.045 [2024-11-05 17:57:06.943870] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e2282134-860d-4b07-9569-9c3cb89022d4 00:22:47.045 [2024-11-05 17:57:06.945018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.045 [2024-11-05 17:57:06.945057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:47.045 [2024-11-05 17:57:06.945080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:22:47.045 [2024-11-05 17:57:06.945090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.045 [2024-11-05 17:57:06.950410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.045 [2024-11-05 17:57:06.950442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:47.045 [2024-11-05 17:57:06.950451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.266 ms 00:22:47.045 [2024-11-05 17:57:06.950464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.045 [2024-11-05 17:57:06.950551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.045 [2024-11-05 17:57:06.950562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:47.045 [2024-11-05 17:57:06.950570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:47.045 [2024-11-05 17:57:06.950579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.045 [2024-11-05 17:57:06.950638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.045 [2024-11-05 17:57:06.950649] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:47.045 [2024-11-05 17:57:06.950657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:47.045 [2024-11-05 17:57:06.950666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.045 [2024-11-05 17:57:06.950685] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:47.045 [2024-11-05 17:57:06.952156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.045 [2024-11-05 17:57:06.952182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:47.045 [2024-11-05 17:57:06.952193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.473 ms 00:22:47.045 [2024-11-05 17:57:06.952202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.045 [2024-11-05 17:57:06.952233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.045 [2024-11-05 17:57:06.952242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:47.045 [2024-11-05 17:57:06.952255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:47.045 [2024-11-05 17:57:06.952263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.045 [2024-11-05 17:57:06.952282] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:47.045 [2024-11-05 17:57:06.952417] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:47.045 [2024-11-05 17:57:06.952434] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:47.045 [2024-11-05 17:57:06.952448] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:47.045 [2024-11-05 17:57:06.952459] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:47.045 [2024-11-05 17:57:06.952469] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:47.045 [2024-11-05 17:57:06.952486] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:47.045 [2024-11-05 17:57:06.952495] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:47.045 [2024-11-05 17:57:06.952503] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:47.045 [2024-11-05 17:57:06.952511] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:47.045 [2024-11-05 17:57:06.952521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.045 [2024-11-05 17:57:06.952528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:47.045 [2024-11-05 17:57:06.952540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:22:47.045 [2024-11-05 17:57:06.952547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.045 [2024-11-05 17:57:06.952633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.045 [2024-11-05 17:57:06.952640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:47.045 [2024-11-05 17:57:06.952649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:47.045 [2024-11-05 17:57:06.952660] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.045 [2024-11-05 17:57:06.952753] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:47.045 [2024-11-05 17:57:06.952766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:47.045 [2024-11-05 17:57:06.952776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:47.045 [2024-11-05 17:57:06.952783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:47.045 [2024-11-05 17:57:06.952792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:47.045 [2024-11-05 17:57:06.952799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:47.045 [2024-11-05 17:57:06.952809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:47.045 [2024-11-05 17:57:06.952816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:47.045 [2024-11-05 17:57:06.952824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:47.045 [2024-11-05 17:57:06.952831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:47.045 [2024-11-05 17:57:06.952839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:47.045 [2024-11-05 17:57:06.952845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:47.045 [2024-11-05 17:57:06.952855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:47.045 [2024-11-05 17:57:06.952861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:47.045 [2024-11-05 17:57:06.952870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:47.045 [2024-11-05 17:57:06.952876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:47.045 [2024-11-05 17:57:06.952884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:47.045 [2024-11-05 17:57:06.952890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:47.045 [2024-11-05 17:57:06.952898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:47.045 [2024-11-05 17:57:06.952905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:47.045 [2024-11-05 17:57:06.952912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:47.045 [2024-11-05 17:57:06.952919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:47.045 [2024-11-05 17:57:06.952926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:47.045 [2024-11-05 17:57:06.952933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:47.045 [2024-11-05 17:57:06.952941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:47.045 [2024-11-05 17:57:06.952947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:47.045 [2024-11-05 17:57:06.952955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:47.045 [2024-11-05 17:57:06.952962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:47.045 [2024-11-05 17:57:06.952972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:47.045 [2024-11-05 17:57:06.952978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:47.045 [2024-11-05 17:57:06.952986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:47.045 [2024-11-05 
17:57:06.952993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:47.045 [2024-11-05 17:57:06.953002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:47.045 [2024-11-05 17:57:06.953009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:47.045 [2024-11-05 17:57:06.953017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:47.046 [2024-11-05 17:57:06.953023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:47.046 [2024-11-05 17:57:06.953031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:47.046 [2024-11-05 17:57:06.953038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:47.046 [2024-11-05 17:57:06.953045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:47.046 [2024-11-05 17:57:06.953052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:47.046 [2024-11-05 17:57:06.953062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:47.046 [2024-11-05 17:57:06.953090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:47.046 [2024-11-05 17:57:06.953098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:47.046 [2024-11-05 17:57:06.953104] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:47.046 [2024-11-05 17:57:06.953114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:47.046 [2024-11-05 17:57:06.953121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:47.046 [2024-11-05 17:57:06.953129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:47.046 [2024-11-05 17:57:06.953137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:47.046 [2024-11-05 17:57:06.953145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:47.046 [2024-11-05 17:57:06.953152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:47.046 [2024-11-05 17:57:06.953160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:47.046 [2024-11-05 17:57:06.953166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:47.046 [2024-11-05 17:57:06.953174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:47.046 [2024-11-05 17:57:06.953184] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:47.046 [2024-11-05 17:57:06.953197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:47.046 [2024-11-05 17:57:06.953205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:47.046 [2024-11-05 17:57:06.953215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:47.046 [2024-11-05 17:57:06.953221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:47.046 [2024-11-05 17:57:06.953230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:47.046 [2024-11-05 17:57:06.953237] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:47.046 [2024-11-05 17:57:06.953247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:47.046 [2024-11-05 17:57:06.953254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:47.046 [2024-11-05 17:57:06.953263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:47.046 [2024-11-05 17:57:06.953270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:47.046 [2024-11-05 17:57:06.953278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:47.046 [2024-11-05 17:57:06.953285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:47.046 [2024-11-05 17:57:06.953294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:47.046 [2024-11-05 17:57:06.953301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:47.046 [2024-11-05 17:57:06.953309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:47.046 [2024-11-05 17:57:06.953316] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:47.046 [2024-11-05 17:57:06.953326] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:47.046 [2024-11-05 17:57:06.953334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:47.046 [2024-11-05 17:57:06.953342] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:47.046 [2024-11-05 17:57:06.953350] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:47.046 [2024-11-05 17:57:06.953359] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:47.046 [2024-11-05 17:57:06.953366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.046 [2024-11-05 17:57:06.953377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:47.046 [2024-11-05 17:57:06.953384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms 00:22:47.046 [2024-11-05 17:57:06.953395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.046 [2024-11-05 17:57:06.953435] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
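The layout dump above is internally consistent: the FTL reports 20971520 L2P entries at 4 bytes each, which is exactly the 80.00 MiB "l2p" region, and the 5 NV-cache chunks it reports are the 5 chunks scrubbed in the trace lines that follow. Because the device was created with --l2p_dram_limit 10, only a ~10 MiB window of that table is kept resident at a time (see the "l2p maximum resident size is: 9 (of 10) MiB" notice further down). A quick check of the arithmetic:

  # 20971520 entries x 4-byte addresses, in MiB:
  echo $(( 20971520 * 4 / 1024 / 1024 ))    # prints 80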
00:22:47.046 [2024-11-05 17:57:06.953446] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:49.574 [2024-11-05 17:57:09.358532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.574 [2024-11-05 17:57:09.358650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:49.574 [2024-11-05 17:57:09.358687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2405.083 ms 00:22:49.574 [2024-11-05 17:57:09.358713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.574 [2024-11-05 17:57:09.368464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.574 [2024-11-05 17:57:09.368523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:49.574 [2024-11-05 17:57:09.368539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.479 ms 00:22:49.574 [2024-11-05 17:57:09.368554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.574 [2024-11-05 17:57:09.368706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.574 [2024-11-05 17:57:09.368729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:49.574 [2024-11-05 17:57:09.368741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:22:49.574 [2024-11-05 17:57:09.368752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.574 [2024-11-05 17:57:09.377785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.574 [2024-11-05 17:57:09.377832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:49.574 [2024-11-05 17:57:09.377843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.982 ms 00:22:49.574 [2024-11-05 17:57:09.377855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.574 [2024-11-05 17:57:09.377893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.574 [2024-11-05 17:57:09.377903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:49.574 [2024-11-05 17:57:09.377912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:49.574 [2024-11-05 17:57:09.377921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.574 [2024-11-05 17:57:09.378266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.574 [2024-11-05 17:57:09.378291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:49.574 [2024-11-05 17:57:09.378301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:22:49.574 [2024-11-05 17:57:09.378313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.574 [2024-11-05 17:57:09.378426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.574 [2024-11-05 17:57:09.378438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:49.574 [2024-11-05 17:57:09.378447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:22:49.575 [2024-11-05 17:57:09.378457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.575 [2024-11-05 17:57:09.384000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.575 [2024-11-05 17:57:09.384040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:49.575 [2024-11-05 
17:57:09.384050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.524 ms 00:22:49.575 [2024-11-05 17:57:09.384059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.575 [2024-11-05 17:57:09.392392] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:49.575 [2024-11-05 17:57:09.395195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.575 [2024-11-05 17:57:09.395225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:49.575 [2024-11-05 17:57:09.395239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.032 ms 00:22:49.575 [2024-11-05 17:57:09.395248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.575 [2024-11-05 17:57:09.461874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.575 [2024-11-05 17:57:09.461931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:49.575 [2024-11-05 17:57:09.461952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.587 ms 00:22:49.575 [2024-11-05 17:57:09.461961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.575 [2024-11-05 17:57:09.462150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.575 [2024-11-05 17:57:09.462162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:49.575 [2024-11-05 17:57:09.462173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:22:49.575 [2024-11-05 17:57:09.462180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.575 [2024-11-05 17:57:09.465214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.575 [2024-11-05 17:57:09.465254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:49.575 [2024-11-05 17:57:09.465270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.010 ms 00:22:49.575 [2024-11-05 17:57:09.465279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.575 [2024-11-05 17:57:09.467672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.575 [2024-11-05 17:57:09.467704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:49.575 [2024-11-05 17:57:09.467716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.355 ms 00:22:49.575 [2024-11-05 17:57:09.467723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.575 [2024-11-05 17:57:09.468016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.575 [2024-11-05 17:57:09.468037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:49.575 [2024-11-05 17:57:09.468053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:22:49.575 [2024-11-05 17:57:09.468060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.575 [2024-11-05 17:57:09.496462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.575 [2024-11-05 17:57:09.496501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:49.575 [2024-11-05 17:57:09.496517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.364 ms 00:22:49.575 [2024-11-05 17:57:09.496525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.575 [2024-11-05 17:57:09.500213] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.575 [2024-11-05 17:57:09.500248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:49.575 [2024-11-05 17:57:09.500260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.641 ms 00:22:49.575 [2024-11-05 17:57:09.500268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.575 [2024-11-05 17:57:09.503017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.575 [2024-11-05 17:57:09.503047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:49.575 [2024-11-05 17:57:09.503059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.712 ms 00:22:49.575 [2024-11-05 17:57:09.503078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.575 [2024-11-05 17:57:09.506148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.575 [2024-11-05 17:57:09.506182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:49.575 [2024-11-05 17:57:09.506195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.034 ms 00:22:49.575 [2024-11-05 17:57:09.506202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.575 [2024-11-05 17:57:09.506245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.575 [2024-11-05 17:57:09.506255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:49.575 [2024-11-05 17:57:09.506265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:49.575 [2024-11-05 17:57:09.506272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.575 [2024-11-05 17:57:09.506339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.575 [2024-11-05 17:57:09.506353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:49.575 [2024-11-05 17:57:09.506368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:22:49.575 [2024-11-05 17:57:09.506378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.575 [2024-11-05 17:57:09.507240] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2565.872 ms, result 0 00:22:49.575 { 00:22:49.575 "name": "ftl0", 00:22:49.575 "uuid": "e2282134-860d-4b07-9569-9c3cb89022d4" 00:22:49.575 } 00:22:49.575 17:57:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:22:49.575 17:57:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:49.834 17:57:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:22:49.834 17:57:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:22:49.834 17:57:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:22:50.092 /dev/nbd0 00:22:50.092 17:57:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:22:50.092 17:57:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:22:50.092 17:57:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # local i 00:22:50.092 17:57:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:50.092 17:57:09 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:50.092 17:57:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:22:50.092 17:57:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # break 00:22:50.092 17:57:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:50.092 17:57:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:50.092 17:57:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:22:50.092 1+0 records in 00:22:50.092 1+0 records out 00:22:50.092 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260289 s, 15.7 MB/s 00:22:50.092 17:57:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:50.092 17:57:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # size=4096 00:22:50.092 17:57:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:50.092 17:57:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:22:50.092 17:57:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # return 0 00:22:50.092 17:57:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:22:50.092 [2024-11-05 17:57:10.050743] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:22:50.092 [2024-11-05 17:57:10.050913] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89907 ] 00:22:50.348 [2024-11-05 17:57:10.182272] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:50.348 [2024-11-05 17:57:10.210573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.348 [2024-11-05 17:57:10.235770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.722  [2024-11-05T17:57:12.374Z] Copying: 194/1024 [MB] (194 MBps) [2024-11-05T17:57:13.307Z] Copying: 389/1024 [MB] (195 MBps) [2024-11-05T17:57:14.680Z] Copying: 627/1024 [MB] (237 MBps) [2024-11-05T17:57:14.939Z] Copying: 871/1024 [MB] (244 MBps) [2024-11-05T17:57:15.198Z] Copying: 1024/1024 [MB] (average 220 MBps) 00:22:55.207 00:22:55.207 17:57:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:57.735 17:57:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:22:57.735 [2024-11-05 17:57:17.338092] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 
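The waitfornbd trace above gates the test on /dev/nbd0 actually appearing: it polls /proc/partitions and then proves the device with a single 4 KiB O_DIRECT read before any real I/O is issued. A minimal sketch of that pattern, reconstructed from the trace rather than copied from autotest_common.sh (the /tmp scratch path and the retry sleep are assumptions):

    waitfornbd() {
        # Wait for the kernel to publish the nbd device in /proc/partitions.
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # Prove the device services I/O: one 4 KiB direct read, matching the
        # "1+0 records in / 1+0 records out" lines above.
        for ((i = 1; i <= 20; i++)); do
            if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
                local size
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1
        done
        return 1
    }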
00:22:57.735 [2024-11-05 17:57:17.338216] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89988 ] 00:22:57.735 [2024-11-05 17:57:17.467488] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:57.735 [2024-11-05 17:57:17.493366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.735 [2024-11-05 17:57:17.516374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.669  [2024-11-05T17:57:19.627Z] Copying: 29/1024 [MB] (29 MBps) [2024-11-05T17:57:20.996Z] Copying: 59/1024 [MB] (30 MBps) [2024-11-05T17:57:21.927Z] Copying: 88/1024 [MB] (29 MBps) [2024-11-05T17:57:22.878Z] Copying: 117/1024 [MB] (28 MBps) [2024-11-05T17:57:23.814Z] Copying: 144/1024 [MB] (26 MBps) [2024-11-05T17:57:24.784Z] Copying: 168/1024 [MB] (24 MBps) [2024-11-05T17:57:25.718Z] Copying: 197/1024 [MB] (28 MBps) [2024-11-05T17:57:26.651Z] Copying: 227/1024 [MB] (30 MBps) [2024-11-05T17:57:27.584Z] Copying: 258/1024 [MB] (30 MBps) [2024-11-05T17:57:28.994Z] Copying: 287/1024 [MB] (28 MBps) [2024-11-05T17:57:29.927Z] Copying: 316/1024 [MB] (29 MBps) [2024-11-05T17:57:30.860Z] Copying: 348/1024 [MB] (31 MBps) [2024-11-05T17:57:31.801Z] Copying: 380/1024 [MB] (32 MBps) [2024-11-05T17:57:32.733Z] Copying: 411/1024 [MB] (30 MBps) [2024-11-05T17:57:33.682Z] Copying: 440/1024 [MB] (28 MBps) [2024-11-05T17:57:34.615Z] Copying: 465/1024 [MB] (25 MBps) [2024-11-05T17:57:35.986Z] Copying: 494/1024 [MB] (28 MBps) [2024-11-05T17:57:36.920Z] Copying: 519/1024 [MB] (24 MBps) [2024-11-05T17:57:37.854Z] Copying: 550/1024 [MB] (31 MBps) [2024-11-05T17:57:38.786Z] Copying: 582/1024 [MB] (31 MBps) [2024-11-05T17:57:39.721Z] Copying: 612/1024 [MB] (29 MBps) [2024-11-05T17:57:40.653Z] Copying: 631/1024 [MB] (19 MBps) [2024-11-05T17:57:41.613Z] Copying: 655/1024 [MB] (24 MBps) [2024-11-05T17:57:42.984Z] Copying: 682/1024 [MB] (26 MBps) [2024-11-05T17:57:43.921Z] Copying: 704/1024 [MB] (22 MBps) [2024-11-05T17:57:44.862Z] Copying: 724320/1048576 [kB] (2808 kBps) [2024-11-05T17:57:45.802Z] Copying: 722/1024 [MB] (14 MBps) [2024-11-05T17:57:46.735Z] Copying: 735/1024 [MB] (13 MBps) [2024-11-05T17:57:47.677Z] Copying: 750/1024 [MB] (14 MBps) [2024-11-05T17:57:48.618Z] Copying: 761/1024 [MB] (11 MBps) [2024-11-05T17:57:50.003Z] Copying: 774/1024 [MB] (12 MBps) [2024-11-05T17:57:50.941Z] Copying: 789/1024 [MB] (15 MBps) [2024-11-05T17:57:51.883Z] Copying: 807/1024 [MB] (17 MBps) [2024-11-05T17:57:52.822Z] Copying: 817/1024 [MB] (10 MBps) [2024-11-05T17:57:53.753Z] Copying: 829/1024 [MB] (11 MBps) [2024-11-05T17:57:54.686Z] Copying: 845/1024 [MB] (15 MBps) [2024-11-05T17:57:55.617Z] Copying: 864/1024 [MB] (19 MBps) [2024-11-05T17:57:56.604Z] Copying: 884/1024 [MB] (20 MBps) [2024-11-05T17:57:57.977Z] Copying: 915/1024 [MB] (31 MBps) [2024-11-05T17:57:58.911Z] Copying: 948/1024 [MB] (32 MBps) [2024-11-05T17:57:59.843Z] Copying: 979/1024 [MB] (31 MBps) [2024-11-05T17:58:00.100Z] Copying: 1008/1024 [MB] (28 MBps) [2024-11-05T17:58:00.360Z] Copying: 1024/1024 [MB] (average 24 MBps) 00:23:40.370 00:23:40.370 17:58:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:23:40.370 17:58:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nbd_stop_disk /dev/nbd0 00:23:40.631 17:58:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:40.893 [2024-11-05 17:58:00.659350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.893 [2024-11-05 17:58:00.659420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:40.893 [2024-11-05 17:58:00.659437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:40.893 [2024-11-05 17:58:00.659448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.893 [2024-11-05 17:58:00.659475] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:40.893 [2024-11-05 17:58:00.660187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.893 [2024-11-05 17:58:00.660221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:40.893 [2024-11-05 17:58:00.660235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.690 ms 00:23:40.893 [2024-11-05 17:58:00.660244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.893 [2024-11-05 17:58:00.663256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.893 [2024-11-05 17:58:00.663298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:40.893 [2024-11-05 17:58:00.663311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.979 ms 00:23:40.893 [2024-11-05 17:58:00.663325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.893 [2024-11-05 17:58:00.685856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.893 [2024-11-05 17:58:00.685907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:40.893 [2024-11-05 17:58:00.685927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.505 ms 00:23:40.893 [2024-11-05 17:58:00.685935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.893 [2024-11-05 17:58:00.692458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.893 [2024-11-05 17:58:00.692498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:40.893 [2024-11-05 17:58:00.692512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.472 ms 00:23:40.893 [2024-11-05 17:58:00.692522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.893 [2024-11-05 17:58:00.695507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.893 [2024-11-05 17:58:00.695552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:40.893 [2024-11-05 17:58:00.695564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.893 ms 00:23:40.893 [2024-11-05 17:58:00.695572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.893 [2024-11-05 17:58:00.702431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.893 [2024-11-05 17:58:00.702481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:40.893 [2024-11-05 17:58:00.702495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.809 ms 00:23:40.893 [2024-11-05 17:58:00.702503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.893 [2024-11-05 17:58:00.702636] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:23:40.893
[2024-11-05 17:58:00.702646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:40.893
[2024-11-05 17:58:00.702657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:23:40.893
[2024-11-05 17:58:00.702671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.893
[2024-11-05 17:58:00.706010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.893
[2024-11-05 17:58:00.706057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:40.893
[2024-11-05 17:58:00.706082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.313 ms 00:23:40.893
[2024-11-05 17:58:00.706089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.893
[2024-11-05 17:58:00.708961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.893
[2024-11-05 17:58:00.709006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:40.893
[2024-11-05 17:58:00.709022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.818 ms 00:23:40.893
[2024-11-05 17:58:00.709028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.893
[2024-11-05 17:58:00.711511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.893
[2024-11-05 17:58:00.711555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:40.893
[2024-11-05 17:58:00.711568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.407 ms 00:23:40.893
[2024-11-05 17:58:00.711575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.893
[2024-11-05 17:58:00.713905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.893
[2024-11-05 17:58:00.713948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:40.893
[2024-11-05 17:58:00.713960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.253 ms 00:23:40.893
[2024-11-05 17:58:00.713967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.893
[2024-11-05 17:58:00.714011] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:40.893
[2024-11-05 17:58:00.714026 .. 17:58:00.714984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 .. Band 100: 0 / 261120 wr_cnt: 0 state: free [100 identical per-band entries condensed] 00:23:40.894
[2024-11-05 17:58:00.715001] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:40.894
[2024-11-05 17:58:00.715021] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e2282134-860d-4b07-9569-9c3cb89022d4 00:23:40.894
[2024-11-05 17:58:00.715030] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:40.894
[2024-11-05 17:58:00.715040] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:40.894
[2024-11-05 17:58:00.715048] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:40.894
[2024-11-05 17:58:00.715059] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:40.894
[2024-11-05 17:58:00.715078] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:40.894
[2024-11-05 17:58:00.715087] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:40.894
[2024-11-05 17:58:00.715095] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:40.894
[2024-11-05 17:58:00.715104] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
low: 0 00:23:40.894 [2024-11-05 17:58:00.715111] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:40.894 [2024-11-05 17:58:00.715120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.894 [2024-11-05 17:58:00.715128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:40.894 [2024-11-05 17:58:00.715139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.110 ms 00:23:40.894 [2024-11-05 17:58:00.715148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.894 [2024-11-05 17:58:00.717364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.895 [2024-11-05 17:58:00.717401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:40.895 [2024-11-05 17:58:00.717413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.190 ms 00:23:40.895 [2024-11-05 17:58:00.717422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.895 [2024-11-05 17:58:00.717542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.895 [2024-11-05 17:58:00.717554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:40.895 [2024-11-05 17:58:00.717571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:23:40.895 [2024-11-05 17:58:00.717580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.895 [2024-11-05 17:58:00.725746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.895 [2024-11-05 17:58:00.725801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:40.895 [2024-11-05 17:58:00.725815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.895 [2024-11-05 17:58:00.725829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.895 [2024-11-05 17:58:00.725901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.895 [2024-11-05 17:58:00.725914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:40.895 [2024-11-05 17:58:00.725927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.895 [2024-11-05 17:58:00.725935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.895 [2024-11-05 17:58:00.726001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.895 [2024-11-05 17:58:00.726011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:40.895 [2024-11-05 17:58:00.726021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.895 [2024-11-05 17:58:00.726032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.895 [2024-11-05 17:58:00.726052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.895 [2024-11-05 17:58:00.726060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:40.895 [2024-11-05 17:58:00.726087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.895 [2024-11-05 17:58:00.726097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.895 [2024-11-05 17:58:00.742348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.895 [2024-11-05 17:58:00.742422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:40.895 [2024-11-05 17:58:00.742438] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.895 [2024-11-05 17:58:00.742447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.895 [2024-11-05 17:58:00.755304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.895 [2024-11-05 17:58:00.755365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:40.895 [2024-11-05 17:58:00.755384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.895 [2024-11-05 17:58:00.755392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.895 [2024-11-05 17:58:00.755492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.895 [2024-11-05 17:58:00.755503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:40.895 [2024-11-05 17:58:00.755514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.895 [2024-11-05 17:58:00.755524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.895 [2024-11-05 17:58:00.755575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.895 [2024-11-05 17:58:00.755585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:40.895 [2024-11-05 17:58:00.755602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.895 [2024-11-05 17:58:00.755610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.895 [2024-11-05 17:58:00.755699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.895 [2024-11-05 17:58:00.755709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:40.895 [2024-11-05 17:58:00.755720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.895 [2024-11-05 17:58:00.755728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.895 [2024-11-05 17:58:00.755765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.895 [2024-11-05 17:58:00.755774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:40.895 [2024-11-05 17:58:00.755785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.895 [2024-11-05 17:58:00.755793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.895 [2024-11-05 17:58:00.755844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.895 [2024-11-05 17:58:00.755853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:40.895 [2024-11-05 17:58:00.755864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.895 [2024-11-05 17:58:00.755872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.895 [2024-11-05 17:58:00.755929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.895 [2024-11-05 17:58:00.755940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:40.895 [2024-11-05 17:58:00.755951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.895 [2024-11-05 17:58:00.755959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.895 [2024-11-05 17:58:00.756136] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 96.720 ms, result 0 00:23:40.895 true 00:23:40.895 
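With 'FTL shutdown' finishing in 96.720 ms with result 0, the clean leg of the test is complete. Condensed, the command sequence it traced (script line numbers as logged, paths abbreviated to the spdk repo root):

    # dirty_shutdown.sh@75-77: write a known 1 GiB pattern through the nbd device
    build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=test/ftl/testfile --bs=4096 --count=262144
    md5sum test/ftl/testfile        # @76: reference checksum for the later verify step
    build/bin/spdk_dd -m 0x2 --if=test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
    # @78-80: flush and detach cleanly; the unload persisted L2P/metadata and
    # set the clean state traced above
    sync /dev/nbd0
    scripts/rpc.py nbd_stop_disk /dev/nbd0
    scripts/rpc.py bdev_ftl_unload -b ftl0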
17:58:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 89779 00:23:40.895 17:58:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid89779 00:23:40.895 17:58:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:23:40.895 [2024-11-05 17:58:00.874258] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:23:40.895 [2024-11-05 17:58:00.874444] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90445 ] 00:23:41.155 [2024-11-05 17:58:01.012877] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:41.155 [2024-11-05 17:58:01.039396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.155 [2024-11-05 17:58:01.072129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.537  [2024-11-05T17:58:03.461Z] Copying: 190/1024 [MB] (190 MBps) [2024-11-05T17:58:04.393Z] Copying: 384/1024 [MB] (193 MBps) [2024-11-05T17:58:05.332Z] Copying: 578/1024 [MB] (193 MBps) [2024-11-05T17:58:06.270Z] Copying: 773/1024 [MB] (195 MBps) [2024-11-05T17:58:06.270Z] Copying: 995/1024 [MB] (221 MBps) [2024-11-05T17:58:06.531Z] Copying: 1024/1024 [MB] (average 200 MBps) 00:23:46.540 00:23:46.540 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 89779 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:23:46.540 17:58:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:46.540 [2024-11-05 17:58:06.471721] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:23:46.540 [2024-11-05 17:58:06.471848] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90510 ] 00:23:46.799 [2024-11-05 17:58:06.601863] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
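This is the dirty-shutdown step itself, condensed from the commands above ($svcpid is a stand-in for the literal pid 89779):

    # @83-84: SIGKILL the spdk_tgt that served the nbd phase; nothing it owned
    # is torn down gracefully
    kill -9 "$svcpid"
    rm -f "/dev/shm/spdk_tgt_trace.pid$svcpid"
    # @87: generate a second 1 GiB data set
    build/bin/spdk_dd --if=/dev/urandom --of=test/ftl/testfile2 --bs=4096 --count=262144
    # @88: with no target running, spdk_dd instantiates ftl0 itself from the saved
    # JSON config and writes blocks 262144..524287 (hence --seek=262144)
    build/bin/spdk_dd --if=test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 \
        --json=test/ftl/config/ftl.json

The 'Performing recovery on blobstore' messages just below are that @88 load repairing state the killed target never flushed.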
00:23:46.799 [2024-11-05 17:58:06.629354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.799 [2024-11-05 17:58:06.646694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.799 [2024-11-05 17:58:06.728265] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:46.799 [2024-11-05 17:58:06.728321] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:46.799 [2024-11-05 17:58:06.789956] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:46.799 [2024-11-05 17:58:06.790427] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:46.799 [2024-11-05 17:58:06.790638] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:47.058 [2024-11-05 17:58:06.952883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.058 [2024-11-05 17:58:06.952931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:47.058 [2024-11-05 17:58:06.952945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:47.058 [2024-11-05 17:58:06.952952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.058 [2024-11-05 17:58:06.952995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.058 [2024-11-05 17:58:06.953003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:47.058 [2024-11-05 17:58:06.953010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:23:47.058 [2024-11-05 17:58:06.953015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.058 [2024-11-05 17:58:06.953031] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:47.058 [2024-11-05 17:58:06.953286] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:47.058 [2024-11-05 17:58:06.953308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.058 [2024-11-05 17:58:06.953316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:47.058 [2024-11-05 17:58:06.953324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:23:47.058 [2024-11-05 17:58:06.953330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.058 [2024-11-05 17:58:06.954290] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:47.058 [2024-11-05 17:58:06.956091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.058 [2024-11-05 17:58:06.956117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:47.058 [2024-11-05 17:58:06.956125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.803 ms 00:23:47.058 [2024-11-05 17:58:06.956135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.058 [2024-11-05 17:58:06.956181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.058 [2024-11-05 17:58:06.956189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:47.058 [2024-11-05 17:58:06.956195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:23:47.058 [2024-11-05 17:58:06.956203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.059 [2024-11-05 17:58:06.960498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:47.059 [2024-11-05 17:58:06.960527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:47.059 [2024-11-05 17:58:06.960535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.266 ms 00:23:47.059 [2024-11-05 17:58:06.960542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.059 [2024-11-05 17:58:06.960610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.059 [2024-11-05 17:58:06.960617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:47.059 [2024-11-05 17:58:06.960623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:23:47.059 [2024-11-05 17:58:06.960631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.059 [2024-11-05 17:58:06.960681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.059 [2024-11-05 17:58:06.960688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:47.059 [2024-11-05 17:58:06.960694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:47.059 [2024-11-05 17:58:06.960701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.059 [2024-11-05 17:58:06.960718] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:47.059 [2024-11-05 17:58:06.961866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.059 [2024-11-05 17:58:06.961890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:47.059 [2024-11-05 17:58:06.961898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.153 ms 00:23:47.059 [2024-11-05 17:58:06.961907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.059 [2024-11-05 17:58:06.961930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.059 [2024-11-05 17:58:06.961937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:47.059 [2024-11-05 17:58:06.961944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:47.059 [2024-11-05 17:58:06.961950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.059 [2024-11-05 17:58:06.961964] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:47.059 [2024-11-05 17:58:06.961979] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:47.059 [2024-11-05 17:58:06.962007] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:47.059 [2024-11-05 17:58:06.962024] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:47.059 [2024-11-05 17:58:06.962116] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:47.059 [2024-11-05 17:58:06.962129] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:47.059 [2024-11-05 17:58:06.962137] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:47.059 [2024-11-05 17:58:06.962145] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:47.059 [2024-11-05 17:58:06.962152] 
ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:47.059 [2024-11-05 17:58:06.962159] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:47.059 [2024-11-05 17:58:06.962165] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:47.059 [2024-11-05 17:58:06.962170] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:47.059 [2024-11-05 17:58:06.962176] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:47.059 [2024-11-05 17:58:06.962184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.059 [2024-11-05 17:58:06.962193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:47.059 [2024-11-05 17:58:06.962199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:23:47.059 [2024-11-05 17:58:06.962207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.059 [2024-11-05 17:58:06.962273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.059 [2024-11-05 17:58:06.962284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:47.059 [2024-11-05 17:58:06.962292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:23:47.059 [2024-11-05 17:58:06.962298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.059 [2024-11-05 17:58:06.962371] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:47.059 [2024-11-05 17:58:06.962385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:47.059 [2024-11-05 17:58:06.962392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:47.059 [2024-11-05 17:58:06.962398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:47.059 [2024-11-05 17:58:06.962411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:47.059 [2024-11-05 17:58:06.962418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:47.059 [2024-11-05 17:58:06.962423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:47.059 [2024-11-05 17:58:06.962428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:47.059 [2024-11-05 17:58:06.962434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:47.059 [2024-11-05 17:58:06.962439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:47.059 [2024-11-05 17:58:06.962444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:47.059 [2024-11-05 17:58:06.962449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:47.059 [2024-11-05 17:58:06.962458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:47.059 [2024-11-05 17:58:06.962463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:47.059 [2024-11-05 17:58:06.962468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:47.059 [2024-11-05 17:58:06.962473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:47.059 [2024-11-05 17:58:06.962478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:47.059 [2024-11-05 17:58:06.962483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:47.059 [2024-11-05 17:58:06.962488] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:47.059 [2024-11-05 17:58:06.962494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:47.059 [2024-11-05 17:58:06.962501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:47.059 [2024-11-05 17:58:06.962506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:47.059 [2024-11-05 17:58:06.962511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:47.059 [2024-11-05 17:58:06.962517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:47.059 [2024-11-05 17:58:06.962523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:47.059 [2024-11-05 17:58:06.962529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:47.059 [2024-11-05 17:58:06.962535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:47.059 [2024-11-05 17:58:06.962541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:47.059 [2024-11-05 17:58:06.962552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:47.059 [2024-11-05 17:58:06.962559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:47.059 [2024-11-05 17:58:06.962565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:47.059 [2024-11-05 17:58:06.962571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:47.059 [2024-11-05 17:58:06.962577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:47.059 [2024-11-05 17:58:06.962583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:47.059 [2024-11-05 17:58:06.962589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:47.059 [2024-11-05 17:58:06.962595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:47.059 [2024-11-05 17:58:06.962600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:47.059 [2024-11-05 17:58:06.962607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:47.059 [2024-11-05 17:58:06.962613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:47.059 [2024-11-05 17:58:06.962618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:47.059 [2024-11-05 17:58:06.962624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:47.059 [2024-11-05 17:58:06.962630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:47.059 [2024-11-05 17:58:06.962636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:47.059 [2024-11-05 17:58:06.962642] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:47.059 [2024-11-05 17:58:06.962656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:47.059 [2024-11-05 17:58:06.962662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:47.059 [2024-11-05 17:58:06.962668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:47.059 [2024-11-05 17:58:06.962675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:47.060 [2024-11-05 17:58:06.962681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:47.060 [2024-11-05 17:58:06.962687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:47.060 
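These region dumps (the NV cache layout above and the base-device remainder continuing below) flatten badly in a captured console. When reading a saved log, a one-liner like this rebuilds them as a three-column table; a convenience sketch assuming GNU grep and paste, with build.log as a placeholder for the captured output, not part of the test itself:

    grep 'dump_region' build.log | grep -o 'Region [a-z0-9_]*\|offset: [0-9.]* MiB\|blocks: [0-9.]* MiB' | paste - - -

Each emitted row then reads region / offset / size, matching ftl_layout.c's dump_region records.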
[2024-11-05 17:58:06.962693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:47.060 [2024-11-05 17:58:06.962699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:47.060 [2024-11-05 17:58:06.962705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:47.060 [2024-11-05 17:58:06.962713] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:47.060 [2024-11-05 17:58:06.962721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:47.060 [2024-11-05 17:58:06.962728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:47.060 [2024-11-05 17:58:06.962734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:47.060 [2024-11-05 17:58:06.962741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:47.060 [2024-11-05 17:58:06.962748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:47.060 [2024-11-05 17:58:06.962754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:47.060 [2024-11-05 17:58:06.962762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:47.060 [2024-11-05 17:58:06.962768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:47.060 [2024-11-05 17:58:06.962774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:47.060 [2024-11-05 17:58:06.962781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:47.060 [2024-11-05 17:58:06.962787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:47.060 [2024-11-05 17:58:06.962793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:47.060 [2024-11-05 17:58:06.962799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:47.060 [2024-11-05 17:58:06.962806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:47.060 [2024-11-05 17:58:06.962826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:47.060 [2024-11-05 17:58:06.962833] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:47.060 [2024-11-05 17:58:06.962843] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:47.060 [2024-11-05 17:58:06.962850] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:47.060 [2024-11-05 17:58:06.962857] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:47.060 [2024-11-05 17:58:06.962863] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:47.060 [2024-11-05 17:58:06.962870] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:47.060 [2024-11-05 17:58:06.962876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.060 [2024-11-05 17:58:06.962887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:47.060 [2024-11-05 17:58:06.962895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:23:47.060 [2024-11-05 17:58:06.962901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.060 [2024-11-05 17:58:06.970818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.060 [2024-11-05 17:58:06.970844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:47.060 [2024-11-05 17:58:06.970856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.875 ms 00:23:47.060 [2024-11-05 17:58:06.970862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.060 [2024-11-05 17:58:06.970925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.060 [2024-11-05 17:58:06.970934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:47.060 [2024-11-05 17:58:06.970939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:23:47.060 [2024-11-05 17:58:06.970946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.060 [2024-11-05 17:58:07.000041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.060 [2024-11-05 17:58:07.000107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:47.060 [2024-11-05 17:58:07.000122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.052 ms 00:23:47.060 [2024-11-05 17:58:07.000131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.060 [2024-11-05 17:58:07.000197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.060 [2024-11-05 17:58:07.000208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:47.060 [2024-11-05 17:58:07.000218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:47.060 [2024-11-05 17:58:07.000226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.060 [2024-11-05 17:58:07.000640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.060 [2024-11-05 17:58:07.000670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:47.060 [2024-11-05 17:58:07.000681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:23:47.060 [2024-11-05 17:58:07.000691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.060 [2024-11-05 17:58:07.000848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.060 [2024-11-05 17:58:07.000868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:47.060 [2024-11-05 17:58:07.000879] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:23:47.060 [2024-11-05 17:58:07.000889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.060 [2024-11-05 17:58:07.006258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.060 [2024-11-05 17:58:07.006298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:47.060 [2024-11-05 17:58:07.006309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.349 ms 00:23:47.060 [2024-11-05 17:58:07.006322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.060 [2024-11-05 17:58:07.008659] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:47.060 [2024-11-05 17:58:07.008697] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:47.060 [2024-11-05 17:58:07.008717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.060 [2024-11-05 17:58:07.008727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:47.060 [2024-11-05 17:58:07.008736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.294 ms 00:23:47.060 [2024-11-05 17:58:07.008745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.060 [2024-11-05 17:58:07.020435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.060 [2024-11-05 17:58:07.020474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:47.060 [2024-11-05 17:58:07.020486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.400 ms 00:23:47.060 [2024-11-05 17:58:07.020503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.060 [2024-11-05 17:58:07.021931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.060 [2024-11-05 17:58:07.021961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:47.060 [2024-11-05 17:58:07.021969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.389 ms 00:23:47.060 [2024-11-05 17:58:07.021975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.060 [2024-11-05 17:58:07.023184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.060 [2024-11-05 17:58:07.023208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:47.060 [2024-11-05 17:58:07.023215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.182 ms 00:23:47.060 [2024-11-05 17:58:07.023220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.060 [2024-11-05 17:58:07.023478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.060 [2024-11-05 17:58:07.023498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:47.060 [2024-11-05 17:58:07.023505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:23:47.060 [2024-11-05 17:58:07.023511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.060 [2024-11-05 17:58:07.036411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.060 [2024-11-05 17:58:07.036447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:47.060 [2024-11-05 17:58:07.036456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
12.888 ms 00:23:47.060 [2024-11-05 17:58:07.036463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.060 [2024-11-05 17:58:07.042265] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:47.060 [2024-11-05 17:58:07.044161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.061 [2024-11-05 17:58:07.044181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:47.061 [2024-11-05 17:58:07.044195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.668 ms 00:23:47.061 [2024-11-05 17:58:07.044205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.061 [2024-11-05 17:58:07.044249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.061 [2024-11-05 17:58:07.044257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:47.061 [2024-11-05 17:58:07.044267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:47.061 [2024-11-05 17:58:07.044273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.061 [2024-11-05 17:58:07.044328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.061 [2024-11-05 17:58:07.044340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:47.061 [2024-11-05 17:58:07.044347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:23:47.061 [2024-11-05 17:58:07.044352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.061 [2024-11-05 17:58:07.044368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.061 [2024-11-05 17:58:07.044374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:47.061 [2024-11-05 17:58:07.044380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:47.061 [2024-11-05 17:58:07.044385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.061 [2024-11-05 17:58:07.044412] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:47.061 [2024-11-05 17:58:07.044426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.061 [2024-11-05 17:58:07.044432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:47.061 [2024-11-05 17:58:07.044441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:47.061 [2024-11-05 17:58:07.044449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.061 [2024-11-05 17:58:07.047404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.061 [2024-11-05 17:58:07.047427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:47.061 [2024-11-05 17:58:07.047434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.941 ms 00:23:47.061 [2024-11-05 17:58:07.047440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.061 [2024-11-05 17:58:07.047498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.061 [2024-11-05 17:58:07.047506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:47.061 [2024-11-05 17:58:07.047512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:23:47.061 [2024-11-05 17:58:07.047519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.061 
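(An aside before the startup summary below, for readers decoding the region dumps above: each "Region type:… blk_offs:… blk_sz:…" entry is given in FTL blocks, while the human-readable layout dump lists the same regions in MiB. The block size is not printed directly, but it follows from the dump itself: region type 0x2, the L2P table, is 0x5000 = 20480 blocks and is printed as 80.00 MiB, i.e. 4 KiB per block. A minimal sketch of the conversion under that derived assumption:)

```python
# Sketch: convert a few (type, blk_offs, blk_sz) triples copied from the
# "SB metadata layout - nvc" dump above into MiB, assuming 4 KiB FTL blocks.
FTL_BLOCK_SIZE = 4096  # bytes; derived from the dump, not printed directly

regions = [
    (0x0, 0x0,    0x20),    # superblock     -> 0.12 MiB, matching "Region sb"
    (0x2, 0x20,   0x5000),  # L2P table      -> 80.00 MiB, matching "Region l2p"
    (0x3, 0x5020, 0x80),    # band_md        -> 0.50 MiB
    (0x4, 0x50a0, 0x80),    # band_md_mirror -> 0.50 MiB
]

for rtype, offs, size in regions:
    print(f"type 0x{rtype:x}: offset {offs * FTL_BLOCK_SIZE / (1 << 20):.2f} MiB,"
          f" size {size * FTL_BLOCK_SIZE / (1 << 20):.2f} MiB")

# The 80 MiB L2P region also matches the parameters reported at layout setup:
# 20971520 L2P entries * 4-byte address size = 80 MiB exactly.
assert 20971520 * 4 == 0x5000 * FTL_BLOCK_SIZE
```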
[2024-11-05 17:58:07.048647] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 95.427 ms, result 0 00:23:48.444  [2024-11-05T17:58:09.379Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-05T17:58:10.370Z] Copying: 41/1024 [MB] (23 MBps) [2024-11-05T17:58:11.309Z] Copying: 55/1024 [MB] (13 MBps) [2024-11-05T17:58:12.248Z] Copying: 69/1024 [MB] (14 MBps) [2024-11-05T17:58:13.192Z] Copying: 89/1024 [MB] (19 MBps) [2024-11-05T17:58:14.134Z] Copying: 120/1024 [MB] (30 MBps) [2024-11-05T17:58:15.076Z] Copying: 135/1024 [MB] (15 MBps) [2024-11-05T17:58:16.459Z] Copying: 145/1024 [MB] (10 MBps) [2024-11-05T17:58:17.425Z] Copying: 159/1024 [MB] (13 MBps) [2024-11-05T17:58:18.366Z] Copying: 171/1024 [MB] (11 MBps) [2024-11-05T17:58:19.307Z] Copying: 187/1024 [MB] (16 MBps) [2024-11-05T17:58:20.248Z] Copying: 207/1024 [MB] (19 MBps) [2024-11-05T17:58:21.193Z] Copying: 224/1024 [MB] (17 MBps) [2024-11-05T17:58:22.135Z] Copying: 244/1024 [MB] (19 MBps) [2024-11-05T17:58:23.075Z] Copying: 265/1024 [MB] (21 MBps) [2024-11-05T17:58:24.460Z] Copying: 290/1024 [MB] (24 MBps) [2024-11-05T17:58:25.400Z] Copying: 313/1024 [MB] (22 MBps) [2024-11-05T17:58:26.332Z] Copying: 340/1024 [MB] (27 MBps) [2024-11-05T17:58:27.267Z] Copying: 385/1024 [MB] (44 MBps) [2024-11-05T17:58:28.206Z] Copying: 424/1024 [MB] (39 MBps) [2024-11-05T17:58:29.148Z] Copying: 455/1024 [MB] (30 MBps) [2024-11-05T17:58:30.089Z] Copying: 476/1024 [MB] (21 MBps) [2024-11-05T17:58:31.464Z] Copying: 497/1024 [MB] (20 MBps) [2024-11-05T17:58:32.397Z] Copying: 541/1024 [MB] (43 MBps) [2024-11-05T17:58:33.329Z] Copying: 583/1024 [MB] (42 MBps) [2024-11-05T17:58:34.266Z] Copying: 627/1024 [MB] (44 MBps) [2024-11-05T17:58:35.201Z] Copying: 672/1024 [MB] (45 MBps) [2024-11-05T17:58:36.134Z] Copying: 716/1024 [MB] (43 MBps) [2024-11-05T17:58:37.098Z] Copying: 762/1024 [MB] (45 MBps) [2024-11-05T17:58:38.469Z] Copying: 806/1024 [MB] (43 MBps) [2024-11-05T17:58:39.402Z] Copying: 851/1024 [MB] (45 MBps) [2024-11-05T17:58:40.335Z] Copying: 895/1024 [MB] (43 MBps) [2024-11-05T17:58:41.268Z] Copying: 939/1024 [MB] (43 MBps) [2024-11-05T17:58:42.200Z] Copying: 984/1024 [MB] (45 MBps) [2024-11-05T17:58:43.134Z] Copying: 1023/1024 [MB] (38 MBps) [2024-11-05T17:58:43.134Z] Copying: 1024/1024 [MB] (average 28 MBps)[2024-11-05 17:58:43.024961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.143 [2024-11-05 17:58:43.025161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:23.143 [2024-11-05 17:58:43.025234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:23.143 [2024-11-05 17:58:43.025261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.143 [2024-11-05 17:58:43.028155] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:23.143 [2024-11-05 17:58:43.029677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.143 [2024-11-05 17:58:43.029706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:23.143 [2024-11-05 17:58:43.029717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.486 ms 00:24:23.143 [2024-11-05 17:58:43.029725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.143 [2024-11-05 17:58:43.041688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.143 [2024-11-05 17:58:43.041716] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:23.143 [2024-11-05 17:58:43.041726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.949 ms 00:24:23.143 [2024-11-05 17:58:43.041735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.143 [2024-11-05 17:58:43.060305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.143 [2024-11-05 17:58:43.060338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:23.143 [2024-11-05 17:58:43.060348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.556 ms 00:24:23.143 [2024-11-05 17:58:43.060356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.143 [2024-11-05 17:58:43.066502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.143 [2024-11-05 17:58:43.066541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:23.143 [2024-11-05 17:58:43.066552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.121 ms 00:24:23.143 [2024-11-05 17:58:43.066561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.143 [2024-11-05 17:58:43.067860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.143 [2024-11-05 17:58:43.067888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:23.143 [2024-11-05 17:58:43.067898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.256 ms 00:24:23.143 [2024-11-05 17:58:43.067906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.143 [2024-11-05 17:58:43.071800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.143 [2024-11-05 17:58:43.071830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:23.143 [2024-11-05 17:58:43.071839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.866 ms 00:24:23.143 [2024-11-05 17:58:43.071847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.143 [2024-11-05 17:58:43.124091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.143 [2024-11-05 17:58:43.124127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:23.143 [2024-11-05 17:58:43.124137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.212 ms 00:24:23.143 [2024-11-05 17:58:43.124153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.143 [2024-11-05 17:58:43.125888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.143 [2024-11-05 17:58:43.125913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:23.143 [2024-11-05 17:58:43.125922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.721 ms 00:24:23.143 [2024-11-05 17:58:43.125929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.143 [2024-11-05 17:58:43.127108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.143 [2024-11-05 17:58:43.127134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:23.143 [2024-11-05 17:58:43.127143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.152 ms 00:24:23.143 [2024-11-05 17:58:43.127150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.143 [2024-11-05 17:58:43.127991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:23.143 [2024-11-05 17:58:43.128017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:23.143 [2024-11-05 17:58:43.128025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:24:23.143 [2024-11-05 17:58:43.128033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.143 [2024-11-05 17:58:43.128915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.143 [2024-11-05 17:58:43.128943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:23.143 [2024-11-05 17:58:43.128952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.823 ms 00:24:23.143 [2024-11-05 17:58:43.128959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.143 [2024-11-05 17:58:43.128985] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:23.143 [2024-11-05 17:58:43.129004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 127744 / 261120 wr_cnt: 1 state: open 00:24:23.143 [2024-11-05 17:58:43.129014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:23.143 [2024-11-05 17:58:43.129022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:23.143 [2024-11-05 17:58:43.129030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:23.143 [2024-11-05 17:58:43.129038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:23.143 [2024-11-05 17:58:43.129046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:23.143 [2024-11-05 17:58:43.129054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:23.143 [2024-11-05 17:58:43.129062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:23.143 [2024-11-05 17:58:43.129087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:23.143 [2024-11-05 17:58:43.129095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:23.143 [2024-11-05 17:58:43.129103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:23.143 [2024-11-05 17:58:43.129111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:23.143 [2024-11-05 17:58:43.129118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:23.143 [2024-11-05 17:58:43.129126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:23.143 [2024-11-05 17:58:43.129133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:23.143 [2024-11-05 17:58:43.129140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:23.143 [2024-11-05 17:58:43.129148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:23.143 [2024-11-05 17:58:43.129156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:23.143 [2024-11-05 17:58:43.129164] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:23.143 [2024-11-05 17:58:43.129171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 
17:58:43.129356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 
00:24:23.144 [2024-11-05 17:58:43.129547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 
wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:23.144 [2024-11-05 17:58:43.129794] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:23.144 [2024-11-05 17:58:43.129808] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e2282134-860d-4b07-9569-9c3cb89022d4 00:24:23.144 [2024-11-05 17:58:43.129817] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 127744 00:24:23.144 [2024-11-05 17:58:43.129825] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128704 00:24:23.144 [2024-11-05 17:58:43.129832] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 127744 00:24:23.144 [2024-11-05 17:58:43.129840] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0075 00:24:23.144 [2024-11-05 17:58:43.129847] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:23.144 [2024-11-05 17:58:43.129855] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:23.144 [2024-11-05 17:58:43.129863] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:23.144 [2024-11-05 17:58:43.129870] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:23.144 [2024-11-05 17:58:43.129882] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:23.144 [2024-11-05 17:58:43.129890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.144 [2024-11-05 17:58:43.129897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:23.145 [2024-11-05 17:58:43.129911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.906 ms 00:24:23.145 [2024-11-05 17:58:43.129918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.145 [2024-11-05 17:58:43.131624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.145 [2024-11-05 17:58:43.131642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:23.145 [2024-11-05 17:58:43.131652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.691 ms 00:24:23.145 [2024-11-05 17:58:43.131660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.145 [2024-11-05 17:58:43.131768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.145 [2024-11-05 17:58:43.131780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:23.145 [2024-11-05 17:58:43.131789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 
00:24:23.145 [2024-11-05 17:58:43.131798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.402 [2024-11-05 17:58:43.137674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.403 [2024-11-05 17:58:43.137708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:23.403 [2024-11-05 17:58:43.137718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.403 [2024-11-05 17:58:43.137726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.403 [2024-11-05 17:58:43.137798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.403 [2024-11-05 17:58:43.137808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:23.403 [2024-11-05 17:58:43.137817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.403 [2024-11-05 17:58:43.137825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.403 [2024-11-05 17:58:43.137865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.403 [2024-11-05 17:58:43.137875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:23.403 [2024-11-05 17:58:43.137884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.403 [2024-11-05 17:58:43.137892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.403 [2024-11-05 17:58:43.137907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.403 [2024-11-05 17:58:43.137918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:23.403 [2024-11-05 17:58:43.137926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.403 [2024-11-05 17:58:43.137934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.403 [2024-11-05 17:58:43.149036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.403 [2024-11-05 17:58:43.149085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:23.403 [2024-11-05 17:58:43.149104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.403 [2024-11-05 17:58:43.149112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.403 [2024-11-05 17:58:43.157970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.403 [2024-11-05 17:58:43.158012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:23.403 [2024-11-05 17:58:43.158023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.403 [2024-11-05 17:58:43.158031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.403 [2024-11-05 17:58:43.158094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.403 [2024-11-05 17:58:43.158104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:23.403 [2024-11-05 17:58:43.158113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.403 [2024-11-05 17:58:43.158121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.403 [2024-11-05 17:58:43.158146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.403 [2024-11-05 17:58:43.158154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:23.403 [2024-11-05 17:58:43.158171] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.403 [2024-11-05 17:58:43.158179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.403 [2024-11-05 17:58:43.158244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.403 [2024-11-05 17:58:43.158254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:23.403 [2024-11-05 17:58:43.158263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.403 [2024-11-05 17:58:43.158274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.403 [2024-11-05 17:58:43.158302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.403 [2024-11-05 17:58:43.158312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:23.403 [2024-11-05 17:58:43.158320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.403 [2024-11-05 17:58:43.158331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.403 [2024-11-05 17:58:43.158369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.403 [2024-11-05 17:58:43.158379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:23.403 [2024-11-05 17:58:43.158387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.403 [2024-11-05 17:58:43.158396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.403 [2024-11-05 17:58:43.158440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:23.403 [2024-11-05 17:58:43.158449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:23.403 [2024-11-05 17:58:43.158461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:23.403 [2024-11-05 17:58:43.158469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.403 [2024-11-05 17:58:43.158594] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 136.727 ms, result 0 00:24:25.302 00:24:25.302 00:24:25.302 17:58:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:26.673 17:58:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:26.673 [2024-11-05 17:58:46.545952] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:24:26.673 [2024-11-05 17:58:46.546284] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90913 ] 00:24:26.931 [2024-11-05 17:58:46.675310] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
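(Another aside before the trace resumes: dirty_shutdown.sh@90 takes an md5sum of testfile2, and @93 reads the data back from ftl0 with spdk_dd so the checksums can be compared. A minimal sketch, with all inputs copied from the log above and the same derived 4 KiB block-size assumption, checking the arithmetic behind a few of the printed figures:)

```python
# Sketch: sanity-check figures from the copy and shutdown dumps above.
FTL_BLOCK_SIZE = 4096  # bytes; same derived assumption as earlier

# spdk_dd --count=262144 moves 262144 blocks = 1024 MiB, which is why the
# progress lines above count up to "1024/1024 [MB]".
assert 262144 * FTL_BLOCK_SIZE == 1024 * (1 << 20)

# "WAF: 1.0075" in the statistics dump is total writes / user writes:
total_writes, user_writes = 128704, 127744
print(f"WAF = {total_writes / user_writes:.4f}")   # -> WAF = 1.0075

# "(average 28 MBps)": 1024 MB copied between the end of 'FTL startup'
# (17:58:07.048) and the final progress line (~17:58:43.02).
print(f"{1024 / (43.02 - 7.048):.1f} MBps")        # -> 28.5 MBps
```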
00:24:26.931 [2024-11-05 17:58:46.703392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.931 [2024-11-05 17:58:46.727784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.931 [2024-11-05 17:58:46.830580] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:26.931 [2024-11-05 17:58:46.830648] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:27.190 [2024-11-05 17:58:46.985188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.190 [2024-11-05 17:58:46.985237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:27.190 [2024-11-05 17:58:46.985252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:27.190 [2024-11-05 17:58:46.985260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.190 [2024-11-05 17:58:46.985309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.190 [2024-11-05 17:58:46.985319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:27.190 [2024-11-05 17:58:46.985328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:27.190 [2024-11-05 17:58:46.985335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.190 [2024-11-05 17:58:46.985357] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:27.190 [2024-11-05 17:58:46.985611] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:27.190 [2024-11-05 17:58:46.985630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.190 [2024-11-05 17:58:46.985638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:27.190 [2024-11-05 17:58:46.985653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:24:27.190 [2024-11-05 17:58:46.985661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.190 [2024-11-05 17:58:46.986999] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:27.190 [2024-11-05 17:58:46.989446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.190 [2024-11-05 17:58:46.989476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:27.190 [2024-11-05 17:58:46.989493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.449 ms 00:24:27.190 [2024-11-05 17:58:46.989506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.190 [2024-11-05 17:58:46.989570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.190 [2024-11-05 17:58:46.989580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:27.190 [2024-11-05 17:58:46.989589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:27.190 [2024-11-05 17:58:46.989596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.190 [2024-11-05 17:58:46.996005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.190 [2024-11-05 17:58:46.996031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:27.190 [2024-11-05 17:58:46.996048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.360 ms 00:24:27.190 [2024-11-05 17:58:46.996055] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.190 [2024-11-05 17:58:46.996156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.190 [2024-11-05 17:58:46.996167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:27.190 [2024-11-05 17:58:46.996175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:24:27.190 [2024-11-05 17:58:46.996183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.190 [2024-11-05 17:58:46.996229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.190 [2024-11-05 17:58:46.996240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:27.190 [2024-11-05 17:58:46.996249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:27.190 [2024-11-05 17:58:46.996262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.190 [2024-11-05 17:58:46.996289] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:27.190 [2024-11-05 17:58:46.997892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.190 [2024-11-05 17:58:46.997917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:27.190 [2024-11-05 17:58:46.997927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.613 ms 00:24:27.190 [2024-11-05 17:58:46.997937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.190 [2024-11-05 17:58:46.997966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.190 [2024-11-05 17:58:46.997976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:27.190 [2024-11-05 17:58:46.997985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:27.190 [2024-11-05 17:58:46.998002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.190 [2024-11-05 17:58:46.998023] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:27.190 [2024-11-05 17:58:46.998043] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:27.190 [2024-11-05 17:58:46.998095] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:27.190 [2024-11-05 17:58:46.998115] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:27.190 [2024-11-05 17:58:46.998224] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:27.190 [2024-11-05 17:58:46.998236] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:27.190 [2024-11-05 17:58:46.998252] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:27.190 [2024-11-05 17:58:46.998266] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:27.190 [2024-11-05 17:58:46.998275] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:27.190 [2024-11-05 17:58:46.998284] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:27.190 [2024-11-05 17:58:46.998291] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:24:27.190 [2024-11-05 17:58:46.998298] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:27.190 [2024-11-05 17:58:46.998306] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:27.190 [2024-11-05 17:58:46.998314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.190 [2024-11-05 17:58:46.998321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:27.190 [2024-11-05 17:58:46.998330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:24:27.190 [2024-11-05 17:58:46.998337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.190 [2024-11-05 17:58:46.998422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.190 [2024-11-05 17:58:46.998431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:27.190 [2024-11-05 17:58:46.998439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:27.190 [2024-11-05 17:58:46.998446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.190 [2024-11-05 17:58:46.998560] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:27.190 [2024-11-05 17:58:46.998574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:27.190 [2024-11-05 17:58:46.998583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:27.190 [2024-11-05 17:58:46.998595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.190 [2024-11-05 17:58:46.998603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:27.190 [2024-11-05 17:58:46.998609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:27.190 [2024-11-05 17:58:46.998617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:27.190 [2024-11-05 17:58:46.998626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:27.190 [2024-11-05 17:58:46.998638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:27.190 [2024-11-05 17:58:46.998645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:27.190 [2024-11-05 17:58:46.998653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:27.190 [2024-11-05 17:58:46.998659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:27.190 [2024-11-05 17:58:46.998665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:27.190 [2024-11-05 17:58:46.998672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:27.190 [2024-11-05 17:58:46.998679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:27.190 [2024-11-05 17:58:46.998686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.190 [2024-11-05 17:58:46.998695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:27.190 [2024-11-05 17:58:46.998702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:27.190 [2024-11-05 17:58:46.998709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.190 [2024-11-05 17:58:46.998716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:27.190 [2024-11-05 17:58:46.998723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:27.190 [2024-11-05 17:58:46.998729] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:27.190 [2024-11-05 17:58:46.998736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:27.190 [2024-11-05 17:58:46.998742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:27.190 [2024-11-05 17:58:46.998751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:27.190 [2024-11-05 17:58:46.998758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:27.190 [2024-11-05 17:58:46.998764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:27.190 [2024-11-05 17:58:46.998772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:27.190 [2024-11-05 17:58:46.998779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:27.190 [2024-11-05 17:58:46.998787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:27.190 [2024-11-05 17:58:46.998794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:27.190 [2024-11-05 17:58:46.998800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:27.190 [2024-11-05 17:58:46.998819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:27.190 [2024-11-05 17:58:46.998826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:27.190 [2024-11-05 17:58:46.998833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:27.190 [2024-11-05 17:58:46.998840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:27.190 [2024-11-05 17:58:46.998847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:27.190 [2024-11-05 17:58:46.998854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:27.190 [2024-11-05 17:58:46.998861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:27.190 [2024-11-05 17:58:46.998867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.190 [2024-11-05 17:58:46.998875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:27.190 [2024-11-05 17:58:46.998882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:27.190 [2024-11-05 17:58:46.998889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.190 [2024-11-05 17:58:46.998896] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:27.190 [2024-11-05 17:58:46.998909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:27.190 [2024-11-05 17:58:46.998916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:27.190 [2024-11-05 17:58:46.998923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.190 [2024-11-05 17:58:46.998931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:27.190 [2024-11-05 17:58:46.998940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:27.190 [2024-11-05 17:58:46.998948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:27.190 [2024-11-05 17:58:46.998954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:27.190 [2024-11-05 17:58:46.998961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:27.190 [2024-11-05 17:58:46.998968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:24:27.190 [2024-11-05 17:58:46.998976] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:27.190 [2024-11-05 17:58:46.998985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:27.190 [2024-11-05 17:58:46.998994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:27.190 [2024-11-05 17:58:46.999001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:27.190 [2024-11-05 17:58:46.999007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:27.190 [2024-11-05 17:58:46.999014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:27.190 [2024-11-05 17:58:46.999025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:27.190 [2024-11-05 17:58:46.999033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:27.190 [2024-11-05 17:58:46.999040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:27.190 [2024-11-05 17:58:46.999047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:27.190 [2024-11-05 17:58:46.999055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:27.190 [2024-11-05 17:58:46.999104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:27.190 [2024-11-05 17:58:46.999114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:27.190 [2024-11-05 17:58:46.999122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:27.190 [2024-11-05 17:58:46.999130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:27.190 [2024-11-05 17:58:46.999138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:27.190 [2024-11-05 17:58:46.999145] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:27.190 [2024-11-05 17:58:46.999158] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:27.190 [2024-11-05 17:58:46.999167] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:27.190 [2024-11-05 17:58:46.999175] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:27.190 [2024-11-05 17:58:46.999183] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:27.190 [2024-11-05 17:58:46.999191] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:27.190 [2024-11-05 17:58:46.999198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.190 [2024-11-05 17:58:46.999206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:27.190 [2024-11-05 17:58:46.999213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.713 ms 00:24:27.190 [2024-11-05 17:58:46.999226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.190 [2024-11-05 17:58:47.010725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.190 [2024-11-05 17:58:47.010753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:27.190 [2024-11-05 17:58:47.010765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.451 ms 00:24:27.190 [2024-11-05 17:58:47.010779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.190 [2024-11-05 17:58:47.010872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.191 [2024-11-05 17:58:47.010883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:27.191 [2024-11-05 17:58:47.010897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:24:27.191 [2024-11-05 17:58:47.010905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.191 [2024-11-05 17:58:47.037253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.191 [2024-11-05 17:58:47.037338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:27.191 [2024-11-05 17:58:47.037376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.286 ms 00:24:27.191 [2024-11-05 17:58:47.037405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.191 [2024-11-05 17:58:47.037527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.191 [2024-11-05 17:58:47.037560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:27.191 [2024-11-05 17:58:47.037588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:24:27.191 [2024-11-05 17:58:47.037626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.191 [2024-11-05 17:58:47.038369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.191 [2024-11-05 17:58:47.038429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:27.191 [2024-11-05 17:58:47.038459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:24:27.191 [2024-11-05 17:58:47.038491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.191 [2024-11-05 17:58:47.038898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.191 [2024-11-05 17:58:47.038938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:27.191 [2024-11-05 17:58:47.038962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:24:27.191 [2024-11-05 17:58:47.038985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.191 [2024-11-05 17:58:47.045642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.191 [2024-11-05 
17:58:47.045670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:27.191 [2024-11-05 17:58:47.045681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.607 ms 00:24:27.191 [2024-11-05 17:58:47.045689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.191 [2024-11-05 17:58:47.048446] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:27.191 [2024-11-05 17:58:47.048484] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:27.191 [2024-11-05 17:58:47.048498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.191 [2024-11-05 17:58:47.048506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:27.191 [2024-11-05 17:58:47.048515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.727 ms 00:24:27.191 [2024-11-05 17:58:47.048522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.191 [2024-11-05 17:58:47.063304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.191 [2024-11-05 17:58:47.063336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:27.191 [2024-11-05 17:58:47.063349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.692 ms 00:24:27.191 [2024-11-05 17:58:47.063357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.191 [2024-11-05 17:58:47.064879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.191 [2024-11-05 17:58:47.064906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:27.191 [2024-11-05 17:58:47.064915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.492 ms 00:24:27.191 [2024-11-05 17:58:47.064923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.191 [2024-11-05 17:58:47.066404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.191 [2024-11-05 17:58:47.066430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:27.191 [2024-11-05 17:58:47.066438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.450 ms 00:24:27.191 [2024-11-05 17:58:47.066445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.191 [2024-11-05 17:58:47.066764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.191 [2024-11-05 17:58:47.066781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:27.191 [2024-11-05 17:58:47.066790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:24:27.191 [2024-11-05 17:58:47.066800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.191 [2024-11-05 17:58:47.085615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.191 [2024-11-05 17:58:47.085654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:27.191 [2024-11-05 17:58:47.085665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.777 ms 00:24:27.191 [2024-11-05 17:58:47.085674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.191 [2024-11-05 17:58:47.093303] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:27.191 [2024-11-05 17:58:47.096089] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.191 [2024-11-05 17:58:47.096114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:27.191 [2024-11-05 17:58:47.096127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.372 ms 00:24:27.191 [2024-11-05 17:58:47.096136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.191 [2024-11-05 17:58:47.096198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.191 [2024-11-05 17:58:47.096212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:27.191 [2024-11-05 17:58:47.096222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:24:27.191 [2024-11-05 17:58:47.096230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.191 [2024-11-05 17:58:47.097990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.191 [2024-11-05 17:58:47.098017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:27.191 [2024-11-05 17:58:47.098034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.715 ms 00:24:27.191 [2024-11-05 17:58:47.098042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.191 [2024-11-05 17:58:47.098079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.191 [2024-11-05 17:58:47.098088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:27.191 [2024-11-05 17:58:47.098097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:24:27.191 [2024-11-05 17:58:47.098105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.191 [2024-11-05 17:58:47.098146] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:27.191 [2024-11-05 17:58:47.098157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.191 [2024-11-05 17:58:47.098165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:27.191 [2024-11-05 17:58:47.098175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:27.191 [2024-11-05 17:58:47.098183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.191 [2024-11-05 17:58:47.101639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.191 [2024-11-05 17:58:47.101668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:27.191 [2024-11-05 17:58:47.101678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.439 ms 00:24:27.191 [2024-11-05 17:58:47.101686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.191 [2024-11-05 17:58:47.101758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.191 [2024-11-05 17:58:47.101773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:27.191 [2024-11-05 17:58:47.101782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:24:27.191 [2024-11-05 17:58:47.101795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.191 [2024-11-05 17:58:47.102787] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 117.167 ms, result 0 00:24:28.566  [2024-11-05T17:58:49.501Z] Copying: 1064/1048576 [kB] (1064 kBps) [2024-11-05T17:58:50.435Z] Copying: 5512/1048576 [kB] 
(4448 kBps) [2024-11-05T17:58:51.370Z] Copying: 51/1024 [MB] (46 MBps) [2024-11-05T17:58:52.303Z] Copying: 102/1024 [MB] (50 MBps) [2024-11-05T17:58:53.676Z] Copying: 153/1024 [MB] (50 MBps) [2024-11-05T17:58:54.610Z] Copying: 206/1024 [MB] (52 MBps) [2024-11-05T17:58:55.546Z] Copying: 259/1024 [MB] (53 MBps) [2024-11-05T17:58:56.479Z] Copying: 311/1024 [MB] (52 MBps) [2024-11-05T17:58:57.412Z] Copying: 362/1024 [MB] (50 MBps) [2024-11-05T17:58:58.347Z] Copying: 414/1024 [MB] (51 MBps) [2024-11-05T17:58:59.281Z] Copying: 466/1024 [MB] (51 MBps) [2024-11-05T17:59:00.653Z] Copying: 515/1024 [MB] (49 MBps) [2024-11-05T17:59:01.587Z] Copying: 565/1024 [MB] (49 MBps) [2024-11-05T17:59:02.521Z] Copying: 616/1024 [MB] (51 MBps) [2024-11-05T17:59:03.455Z] Copying: 669/1024 [MB] (52 MBps) [2024-11-05T17:59:04.389Z] Copying: 720/1024 [MB] (51 MBps) [2024-11-05T17:59:05.320Z] Copying: 773/1024 [MB] (53 MBps) [2024-11-05T17:59:06.693Z] Copying: 827/1024 [MB] (53 MBps) [2024-11-05T17:59:07.625Z] Copying: 879/1024 [MB] (52 MBps) [2024-11-05T17:59:08.591Z] Copying: 928/1024 [MB] (48 MBps) [2024-11-05T17:59:09.158Z] Copying: 980/1024 [MB] (52 MBps) [2024-11-05T17:59:09.158Z] Copying: 1024/1024 [MB] (average 47 MBps)[2024-11-05 17:59:09.132954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.167 [2024-11-05 17:59:09.133015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:49.167 [2024-11-05 17:59:09.133030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:49.167 [2024-11-05 17:59:09.133040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.167 [2024-11-05 17:59:09.133089] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:49.167 [2024-11-05 17:59:09.133578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.168 [2024-11-05 17:59:09.133615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:49.168 [2024-11-05 17:59:09.133626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms 00:24:49.168 [2024-11-05 17:59:09.133641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.168 [2024-11-05 17:59:09.133897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.168 [2024-11-05 17:59:09.133913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:49.168 [2024-11-05 17:59:09.133923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:24:49.168 [2024-11-05 17:59:09.133931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.168 [2024-11-05 17:59:09.146557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.168 [2024-11-05 17:59:09.146585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:49.168 [2024-11-05 17:59:09.146601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.604 ms 00:24:49.168 [2024-11-05 17:59:09.146609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.168 [2024-11-05 17:59:09.152762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.168 [2024-11-05 17:59:09.152788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:49.168 [2024-11-05 17:59:09.152798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.129 ms 00:24:49.168 [2024-11-05 17:59:09.152806] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.168 [2024-11-05 17:59:09.153974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.168 [2024-11-05 17:59:09.154003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:49.168 [2024-11-05 17:59:09.154013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.134 ms 00:24:49.168 [2024-11-05 17:59:09.154021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.168 [2024-11-05 17:59:09.157263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.168 [2024-11-05 17:59:09.157290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:49.168 [2024-11-05 17:59:09.157304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.213 ms 00:24:49.168 [2024-11-05 17:59:09.157311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.168 [2024-11-05 17:59:09.158569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.168 [2024-11-05 17:59:09.158596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:49.168 [2024-11-05 17:59:09.158605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.238 ms 00:24:49.168 [2024-11-05 17:59:09.158619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.168 [2024-11-05 17:59:09.160306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.168 [2024-11-05 17:59:09.160332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:49.168 [2024-11-05 17:59:09.160341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.672 ms 00:24:49.168 [2024-11-05 17:59:09.160348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.427 [2024-11-05 17:59:09.161525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.427 [2024-11-05 17:59:09.161552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:49.427 [2024-11-05 17:59:09.161560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.141 ms 00:24:49.427 [2024-11-05 17:59:09.161567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.427 [2024-11-05 17:59:09.162540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.427 [2024-11-05 17:59:09.162566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:49.427 [2024-11-05 17:59:09.162574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.947 ms 00:24:49.427 [2024-11-05 17:59:09.162581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.427 [2024-11-05 17:59:09.163378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.427 [2024-11-05 17:59:09.163404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:49.427 [2024-11-05 17:59:09.163412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.748 ms 00:24:49.427 [2024-11-05 17:59:09.163418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.427 [2024-11-05 17:59:09.163443] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:49.427 [2024-11-05 17:59:09.163455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:49.427 [2024-11-05 17:59:09.163465] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:24:49.427 [2024-11-05 17:59:09.163473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:49.427 [2024-11-05 17:59:09.163483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:49.427 [2024-11-05 17:59:09.163495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:49.427 [2024-11-05 17:59:09.163508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:49.427 [2024-11-05 17:59:09.163518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:49.427 [2024-11-05 17:59:09.163526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:49.427 [2024-11-05 17:59:09.163534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:49.427 [2024-11-05 17:59:09.163541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:49.427 [2024-11-05 17:59:09.163550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:49.427 [2024-11-05 17:59:09.163562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:49.427 [2024-11-05 17:59:09.163573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:49.427 [2024-11-05 17:59:09.163580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:49.427 [2024-11-05 17:59:09.163588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:49.427 [2024-11-05 17:59:09.163595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:49.427 [2024-11-05 17:59:09.163602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 
17:59:09.163685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 
00:24:49.428 [2024-11-05 17:59:09.163930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.163996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.164007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.164020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.164033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.164047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.164054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.164062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.164083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.164090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.164098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.164110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.164123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.164134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.164142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.164149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.164157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 
wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.164164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.164171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.164178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.164185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.164192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:49.428 [2024-11-05 17:59:09.164200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:49.429 [2024-11-05 17:59:09.164207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:49.429 [2024-11-05 17:59:09.164214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:49.429 [2024-11-05 17:59:09.164224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:49.429 [2024-11-05 17:59:09.164236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:49.429 [2024-11-05 17:59:09.164247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:49.429 [2024-11-05 17:59:09.164255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:49.429 [2024-11-05 17:59:09.164262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:49.429 [2024-11-05 17:59:09.164269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:49.429 [2024-11-05 17:59:09.164279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:49.429 [2024-11-05 17:59:09.164291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:49.429 [2024-11-05 17:59:09.164304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:49.429 [2024-11-05 17:59:09.164317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:49.429 [2024-11-05 17:59:09.164325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:49.429 [2024-11-05 17:59:09.164333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:49.429 [2024-11-05 17:59:09.164340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:49.429 [2024-11-05 17:59:09.164347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:49.429 [2024-11-05 17:59:09.164354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:49.429 [2024-11-05 17:59:09.164362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:49.429 [2024-11-05 17:59:09.164377] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] 00:24:49.429 [2024-11-05 17:59:09.164392] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e2282134-860d-4b07-9569-9c3cb89022d4 00:24:49.429 [2024-11-05 17:59:09.164413] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:24:49.429 [2024-11-05 17:59:09.164426] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 136896 00:24:49.429 [2024-11-05 17:59:09.164437] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 134912 00:24:49.429 [2024-11-05 17:59:09.164448] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0147 00:24:49.429 [2024-11-05 17:59:09.164455] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:49.429 [2024-11-05 17:59:09.164463] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:49.429 [2024-11-05 17:59:09.164470] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:49.429 [2024-11-05 17:59:09.164476] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:49.429 [2024-11-05 17:59:09.164482] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:49.429 [2024-11-05 17:59:09.164490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.429 [2024-11-05 17:59:09.164497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:49.429 [2024-11-05 17:59:09.164505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.047 ms 00:24:49.429 [2024-11-05 17:59:09.164512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.429 [2024-11-05 17:59:09.165849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.429 [2024-11-05 17:59:09.165872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:49.429 [2024-11-05 17:59:09.165882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.312 ms 00:24:49.429 [2024-11-05 17:59:09.165889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.429 [2024-11-05 17:59:09.165961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.429 [2024-11-05 17:59:09.165968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:49.429 [2024-11-05 17:59:09.165981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:24:49.429 [2024-11-05 17:59:09.165988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.429 [2024-11-05 17:59:09.170607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.429 [2024-11-05 17:59:09.170633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:49.429 [2024-11-05 17:59:09.170643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.429 [2024-11-05 17:59:09.170650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.429 [2024-11-05 17:59:09.170700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.429 [2024-11-05 17:59:09.170708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:49.429 [2024-11-05 17:59:09.170718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.429 [2024-11-05 17:59:09.170725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.429 [2024-11-05 17:59:09.170778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:24:49.429 [2024-11-05 17:59:09.170787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:49.429 [2024-11-05 17:59:09.170795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.429 [2024-11-05 17:59:09.170805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.429 [2024-11-05 17:59:09.170833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.429 [2024-11-05 17:59:09.170843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:49.429 [2024-11-05 17:59:09.170851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.429 [2024-11-05 17:59:09.170860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.429 [2024-11-05 17:59:09.179025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.429 [2024-11-05 17:59:09.179060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:49.429 [2024-11-05 17:59:09.179079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.429 [2024-11-05 17:59:09.179087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.429 [2024-11-05 17:59:09.185759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.429 [2024-11-05 17:59:09.185791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:49.429 [2024-11-05 17:59:09.185807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.429 [2024-11-05 17:59:09.185815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.429 [2024-11-05 17:59:09.185838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.429 [2024-11-05 17:59:09.185846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:49.429 [2024-11-05 17:59:09.185853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.429 [2024-11-05 17:59:09.185861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.429 [2024-11-05 17:59:09.185899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.429 [2024-11-05 17:59:09.185907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:49.429 [2024-11-05 17:59:09.185915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.429 [2024-11-05 17:59:09.185922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.429 [2024-11-05 17:59:09.185981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.430 [2024-11-05 17:59:09.185990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:49.430 [2024-11-05 17:59:09.185998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.430 [2024-11-05 17:59:09.186005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.430 [2024-11-05 17:59:09.186037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.430 [2024-11-05 17:59:09.186046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:49.430 [2024-11-05 17:59:09.186054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.430 [2024-11-05 17:59:09.186061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.430 [2024-11-05 
17:59:09.186106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.430 [2024-11-05 17:59:09.186117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:49.430 [2024-11-05 17:59:09.186125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.430 [2024-11-05 17:59:09.186132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.430 [2024-11-05 17:59:09.186169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.430 [2024-11-05 17:59:09.186178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:49.430 [2024-11-05 17:59:09.186185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.430 [2024-11-05 17:59:09.186192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.430 [2024-11-05 17:59:09.186304] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 53.333 ms, result 0 00:24:49.430 00:24:49.430 00:24:49.430 17:59:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:51.974 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:51.974 17:59:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:51.974 [2024-11-05 17:59:11.466053] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:24:51.974 [2024-11-05 17:59:11.466185] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91179 ] 00:24:51.974 [2024-11-05 17:59:11.594635] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:24:51.974 [2024-11-05 17:59:11.625620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.974 [2024-11-05 17:59:11.644587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.974 [2024-11-05 17:59:11.728949] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:51.974 [2024-11-05 17:59:11.729012] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:51.974 [2024-11-05 17:59:11.881372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.974 [2024-11-05 17:59:11.881424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:51.974 [2024-11-05 17:59:11.881437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:51.974 [2024-11-05 17:59:11.881445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.974 [2024-11-05 17:59:11.881485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.974 [2024-11-05 17:59:11.881497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:51.974 [2024-11-05 17:59:11.881506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:24:51.974 [2024-11-05 17:59:11.881518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.974 [2024-11-05 17:59:11.881538] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:51.974 [2024-11-05 17:59:11.881760] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:51.974 [2024-11-05 17:59:11.881774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.974 [2024-11-05 17:59:11.881785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:51.974 [2024-11-05 17:59:11.881795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:24:51.974 [2024-11-05 17:59:11.881803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.975 [2024-11-05 17:59:11.882865] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:51.975 [2024-11-05 17:59:11.884951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.975 [2024-11-05 17:59:11.884986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:51.975 [2024-11-05 17:59:11.884997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.087 ms 00:24:51.975 [2024-11-05 17:59:11.885011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.975 [2024-11-05 17:59:11.885083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.975 [2024-11-05 17:59:11.885094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:51.975 [2024-11-05 17:59:11.885102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:51.975 [2024-11-05 17:59:11.885110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.975 [2024-11-05 17:59:11.889888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.975 [2024-11-05 17:59:11.889919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:51.975 [2024-11-05 17:59:11.889933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.735 ms 00:24:51.975 [2024-11-05 17:59:11.889943] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.975 [2024-11-05 17:59:11.890027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.975 [2024-11-05 17:59:11.890038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:51.975 [2024-11-05 17:59:11.890046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:24:51.975 [2024-11-05 17:59:11.890054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.975 [2024-11-05 17:59:11.890103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.975 [2024-11-05 17:59:11.890113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:51.975 [2024-11-05 17:59:11.890121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:51.975 [2024-11-05 17:59:11.890131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.975 [2024-11-05 17:59:11.890151] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:51.975 [2024-11-05 17:59:11.891434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.975 [2024-11-05 17:59:11.891461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:51.975 [2024-11-05 17:59:11.891471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.287 ms 00:24:51.975 [2024-11-05 17:59:11.891480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.975 [2024-11-05 17:59:11.891512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.975 [2024-11-05 17:59:11.891521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:51.975 [2024-11-05 17:59:11.891530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:51.975 [2024-11-05 17:59:11.891544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.975 [2024-11-05 17:59:11.891574] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:51.975 [2024-11-05 17:59:11.891592] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:51.975 [2024-11-05 17:59:11.891626] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:51.975 [2024-11-05 17:59:11.891642] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:51.975 [2024-11-05 17:59:11.891748] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:51.975 [2024-11-05 17:59:11.891760] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:51.975 [2024-11-05 17:59:11.891774] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:51.975 [2024-11-05 17:59:11.891785] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:51.975 [2024-11-05 17:59:11.891794] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:51.975 [2024-11-05 17:59:11.891803] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:51.975 [2024-11-05 17:59:11.891812] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:24:51.975 [2024-11-05 17:59:11.891819] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:51.975 [2024-11-05 17:59:11.891825] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:51.975 [2024-11-05 17:59:11.891833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.975 [2024-11-05 17:59:11.891842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:51.975 [2024-11-05 17:59:11.891852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:24:51.975 [2024-11-05 17:59:11.891859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.975 [2024-11-05 17:59:11.891945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.975 [2024-11-05 17:59:11.891953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:51.975 [2024-11-05 17:59:11.891961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:24:51.975 [2024-11-05 17:59:11.891968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.975 [2024-11-05 17:59:11.892076] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:51.975 [2024-11-05 17:59:11.892092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:51.975 [2024-11-05 17:59:11.892100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:51.975 [2024-11-05 17:59:11.892108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.975 [2024-11-05 17:59:11.892120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:51.975 [2024-11-05 17:59:11.892127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:51.975 [2024-11-05 17:59:11.892134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:51.975 [2024-11-05 17:59:11.892141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:51.975 [2024-11-05 17:59:11.892153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:51.975 [2024-11-05 17:59:11.892161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:51.975 [2024-11-05 17:59:11.892168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:51.975 [2024-11-05 17:59:11.892175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:51.975 [2024-11-05 17:59:11.892181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:51.975 [2024-11-05 17:59:11.892187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:51.975 [2024-11-05 17:59:11.892196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:51.975 [2024-11-05 17:59:11.892203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.975 [2024-11-05 17:59:11.892209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:51.975 [2024-11-05 17:59:11.892216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:51.975 [2024-11-05 17:59:11.892222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.975 [2024-11-05 17:59:11.892229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:51.975 [2024-11-05 17:59:11.892236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:51.975 [2024-11-05 17:59:11.892243] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.975 [2024-11-05 17:59:11.892249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:51.975 [2024-11-05 17:59:11.892256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:51.975 [2024-11-05 17:59:11.892263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.975 [2024-11-05 17:59:11.892274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:51.975 [2024-11-05 17:59:11.892282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:51.975 [2024-11-05 17:59:11.892288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.975 [2024-11-05 17:59:11.892295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:51.975 [2024-11-05 17:59:11.892301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:51.975 [2024-11-05 17:59:11.892307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.975 [2024-11-05 17:59:11.892313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:51.975 [2024-11-05 17:59:11.892320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:51.975 [2024-11-05 17:59:11.892326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:51.975 [2024-11-05 17:59:11.892332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:51.975 [2024-11-05 17:59:11.892338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:51.975 [2024-11-05 17:59:11.892345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:51.975 [2024-11-05 17:59:11.892351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:51.975 [2024-11-05 17:59:11.892357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:51.975 [2024-11-05 17:59:11.892364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.975 [2024-11-05 17:59:11.892370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:51.975 [2024-11-05 17:59:11.892378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:51.975 [2024-11-05 17:59:11.892385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.975 [2024-11-05 17:59:11.892392] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:51.975 [2024-11-05 17:59:11.892404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:51.976 [2024-11-05 17:59:11.892412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:51.976 [2024-11-05 17:59:11.892419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.976 [2024-11-05 17:59:11.892427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:51.976 [2024-11-05 17:59:11.892434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:51.976 [2024-11-05 17:59:11.892440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:51.976 [2024-11-05 17:59:11.892447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:51.976 [2024-11-05 17:59:11.892453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:51.976 [2024-11-05 17:59:11.892459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:24:51.976 [2024-11-05 17:59:11.892467] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:51.976 [2024-11-05 17:59:11.892475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:51.976 [2024-11-05 17:59:11.892484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:51.976 [2024-11-05 17:59:11.892491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:51.976 [2024-11-05 17:59:11.892501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:51.976 [2024-11-05 17:59:11.892508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:51.976 [2024-11-05 17:59:11.892515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:51.976 [2024-11-05 17:59:11.892522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:51.976 [2024-11-05 17:59:11.892529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:51.976 [2024-11-05 17:59:11.892536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:51.976 [2024-11-05 17:59:11.892543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:51.976 [2024-11-05 17:59:11.892550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:51.976 [2024-11-05 17:59:11.892557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:51.976 [2024-11-05 17:59:11.892564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:51.976 [2024-11-05 17:59:11.892570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:51.976 [2024-11-05 17:59:11.892577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:51.976 [2024-11-05 17:59:11.892584] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:51.976 [2024-11-05 17:59:11.892592] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:51.976 [2024-11-05 17:59:11.892602] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:51.976 [2024-11-05 17:59:11.892609] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:51.976 [2024-11-05 17:59:11.892619] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:51.976 [2024-11-05 17:59:11.892626] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:51.976 [2024-11-05 17:59:11.892634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.976 [2024-11-05 17:59:11.892641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:51.976 [2024-11-05 17:59:11.892651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.637 ms 00:24:51.976 [2024-11-05 17:59:11.892661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.976 [2024-11-05 17:59:11.901248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.976 [2024-11-05 17:59:11.901295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:51.976 [2024-11-05 17:59:11.901309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.548 ms 00:24:51.976 [2024-11-05 17:59:11.901317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.976 [2024-11-05 17:59:11.901393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.976 [2024-11-05 17:59:11.901401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:51.976 [2024-11-05 17:59:11.901409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:24:51.976 [2024-11-05 17:59:11.901416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.976 [2024-11-05 17:59:11.926414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.976 [2024-11-05 17:59:11.926502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:51.976 [2024-11-05 17:59:11.926532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.950 ms 00:24:51.976 [2024-11-05 17:59:11.926554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.976 [2024-11-05 17:59:11.926670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.976 [2024-11-05 17:59:11.926701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:51.976 [2024-11-05 17:59:11.926732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:24:51.976 [2024-11-05 17:59:11.926762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.976 [2024-11-05 17:59:11.927329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.976 [2024-11-05 17:59:11.927357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:51.976 [2024-11-05 17:59:11.927368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:24:51.976 [2024-11-05 17:59:11.927376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.976 [2024-11-05 17:59:11.927498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.976 [2024-11-05 17:59:11.927516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:51.976 [2024-11-05 17:59:11.927525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:24:51.976 [2024-11-05 17:59:11.927533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.976 [2024-11-05 17:59:11.932430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.976 [2024-11-05 
17:59:11.932460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:51.976 [2024-11-05 17:59:11.932474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.879 ms 00:24:51.976 [2024-11-05 17:59:11.932482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.976 [2024-11-05 17:59:11.934754] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:51.976 [2024-11-05 17:59:11.934788] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:51.976 [2024-11-05 17:59:11.934801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.976 [2024-11-05 17:59:11.934816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:51.976 [2024-11-05 17:59:11.934824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.244 ms 00:24:51.976 [2024-11-05 17:59:11.934832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.976 [2024-11-05 17:59:11.951999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.976 [2024-11-05 17:59:11.952031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:51.976 [2024-11-05 17:59:11.952042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.086 ms 00:24:51.976 [2024-11-05 17:59:11.952051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.976 [2024-11-05 17:59:11.953522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.976 [2024-11-05 17:59:11.953551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:51.976 [2024-11-05 17:59:11.953560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.423 ms 00:24:51.976 [2024-11-05 17:59:11.953567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.976 [2024-11-05 17:59:11.954676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.976 [2024-11-05 17:59:11.954705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:51.976 [2024-11-05 17:59:11.954714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.080 ms 00:24:51.976 [2024-11-05 17:59:11.954720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.976 [2024-11-05 17:59:11.955039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.976 [2024-11-05 17:59:11.955060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:51.976 [2024-11-05 17:59:11.955081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:24:51.976 [2024-11-05 17:59:11.955089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.235 [2024-11-05 17:59:11.969609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.235 [2024-11-05 17:59:11.969656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:52.235 [2024-11-05 17:59:11.969668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.500 ms 00:24:52.235 [2024-11-05 17:59:11.969676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.235 [2024-11-05 17:59:11.976952] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:52.235 [2024-11-05 17:59:11.979343] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.235 [2024-11-05 17:59:11.979376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:52.235 [2024-11-05 17:59:11.979387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.631 ms 00:24:52.235 [2024-11-05 17:59:11.979403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.235 [2024-11-05 17:59:11.979454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.235 [2024-11-05 17:59:11.979465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:52.235 [2024-11-05 17:59:11.979474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:52.235 [2024-11-05 17:59:11.979482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.235 [2024-11-05 17:59:11.980034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.235 [2024-11-05 17:59:11.980080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:52.235 [2024-11-05 17:59:11.980093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.493 ms 00:24:52.235 [2024-11-05 17:59:11.980104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.235 [2024-11-05 17:59:11.980125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.235 [2024-11-05 17:59:11.980133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:52.235 [2024-11-05 17:59:11.980140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:52.235 [2024-11-05 17:59:11.980147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.235 [2024-11-05 17:59:11.980177] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:52.235 [2024-11-05 17:59:11.980186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.235 [2024-11-05 17:59:11.980194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:52.235 [2024-11-05 17:59:11.980204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:52.235 [2024-11-05 17:59:11.980214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.235 [2024-11-05 17:59:11.983189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.235 [2024-11-05 17:59:11.983222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:52.235 [2024-11-05 17:59:11.983238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.958 ms 00:24:52.235 [2024-11-05 17:59:11.983247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.235 [2024-11-05 17:59:11.983312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.235 [2024-11-05 17:59:11.983321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:52.235 [2024-11-05 17:59:11.983329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:24:52.235 [2024-11-05 17:59:11.983342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.235 [2024-11-05 17:59:11.984282] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 102.530 ms, result 0 00:24:53.169  [2024-11-05T17:59:14.532Z] Copying: 45/1024 [MB] (45 MBps) [2024-11-05T17:59:15.465Z] Copying: 92/1024 [MB] (47 MBps) 
[2024-11-05T17:59:16.448Z] Copying: 143/1024 [MB] (50 MBps) [2024-11-05T17:59:17.382Z] Copying: 191/1024 [MB] (48 MBps) [2024-11-05T17:59:18.316Z] Copying: 238/1024 [MB] (46 MBps) [2024-11-05T17:59:19.250Z] Copying: 285/1024 [MB] (47 MBps) [2024-11-05T17:59:20.183Z] Copying: 332/1024 [MB] (46 MBps) [2024-11-05T17:59:21.557Z] Copying: 378/1024 [MB] (45 MBps) [2024-11-05T17:59:22.489Z] Copying: 428/1024 [MB] (50 MBps) [2024-11-05T17:59:23.428Z] Copying: 479/1024 [MB] (50 MBps) [2024-11-05T17:59:24.369Z] Copying: 519/1024 [MB] (40 MBps) [2024-11-05T17:59:25.312Z] Copying: 544/1024 [MB] (25 MBps) [2024-11-05T17:59:26.249Z] Copying: 561/1024 [MB] (17 MBps) [2024-11-05T17:59:27.182Z] Copying: 593/1024 [MB] (31 MBps) [2024-11-05T17:59:28.556Z] Copying: 640/1024 [MB] (46 MBps) [2024-11-05T17:59:29.490Z] Copying: 684/1024 [MB] (44 MBps) [2024-11-05T17:59:30.424Z] Copying: 730/1024 [MB] (45 MBps) [2024-11-05T17:59:31.358Z] Copying: 776/1024 [MB] (46 MBps) [2024-11-05T17:59:32.295Z] Copying: 822/1024 [MB] (46 MBps) [2024-11-05T17:59:33.303Z] Copying: 860/1024 [MB] (37 MBps) [2024-11-05T17:59:34.238Z] Copying: 888/1024 [MB] (27 MBps) [2024-11-05T17:59:35.182Z] Copying: 922/1024 [MB] (34 MBps) [2024-11-05T17:59:36.572Z] Copying: 953/1024 [MB] (30 MBps) [2024-11-05T17:59:37.514Z] Copying: 978/1024 [MB] (24 MBps) [2024-11-05T17:59:38.455Z] Copying: 998/1024 [MB] (20 MBps) [2024-11-05T17:59:38.455Z] Copying: 1021/1024 [MB] (23 MBps) [2024-11-05T17:59:38.455Z] Copying: 1024/1024 [MB] (average 39 MBps)[2024-11-05 17:59:38.209229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.464 [2024-11-05 17:59:38.209285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:18.464 [2024-11-05 17:59:38.209299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:18.464 [2024-11-05 17:59:38.209312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.464 [2024-11-05 17:59:38.209332] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:18.464 [2024-11-05 17:59:38.209764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.464 [2024-11-05 17:59:38.209793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:18.464 [2024-11-05 17:59:38.209802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:25:18.464 [2024-11-05 17:59:38.209809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.464 [2024-11-05 17:59:38.210016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.464 [2024-11-05 17:59:38.210025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:18.464 [2024-11-05 17:59:38.210033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:25:18.464 [2024-11-05 17:59:38.210044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.464 [2024-11-05 17:59:38.213492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.464 [2024-11-05 17:59:38.213514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:18.464 [2024-11-05 17:59:38.213524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.431 ms 00:25:18.464 [2024-11-05 17:59:38.213532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.464 [2024-11-05 17:59:38.220644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:18.464 [2024-11-05 17:59:38.220673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:18.464 [2024-11-05 17:59:38.220690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.097 ms 00:25:18.464 [2024-11-05 17:59:38.220698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.464 [2024-11-05 17:59:38.222006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.464 [2024-11-05 17:59:38.222039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:18.464 [2024-11-05 17:59:38.222048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.264 ms 00:25:18.464 [2024-11-05 17:59:38.222055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.464 [2024-11-05 17:59:38.225462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.464 [2024-11-05 17:59:38.225493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:18.464 [2024-11-05 17:59:38.225503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.380 ms 00:25:18.464 [2024-11-05 17:59:38.225510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.464 [2024-11-05 17:59:38.226795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.464 [2024-11-05 17:59:38.226830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:18.464 [2024-11-05 17:59:38.226840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.247 ms 00:25:18.464 [2024-11-05 17:59:38.226847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.464 [2024-11-05 17:59:38.228640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.464 [2024-11-05 17:59:38.228669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:18.464 [2024-11-05 17:59:38.228677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.774 ms 00:25:18.464 [2024-11-05 17:59:38.228693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.464 [2024-11-05 17:59:38.229685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.464 [2024-11-05 17:59:38.229714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:18.464 [2024-11-05 17:59:38.229723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.977 ms 00:25:18.464 [2024-11-05 17:59:38.229729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.464 [2024-11-05 17:59:38.230613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.464 [2024-11-05 17:59:38.230643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:18.464 [2024-11-05 17:59:38.230651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.869 ms 00:25:18.464 [2024-11-05 17:59:38.230658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.464 [2024-11-05 17:59:38.231469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.464 [2024-11-05 17:59:38.231498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:18.464 [2024-11-05 17:59:38.231507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.775 ms 00:25:18.464 [2024-11-05 17:59:38.231514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.464 [2024-11-05 
17:59:38.231528] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:18.464 [2024-11-05 17:59:38.231541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:18.464 [2024-11-05 17:59:38.231550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:25:18.464 [2024-11-05 17:59:38.231558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 
17:59:38.231718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:25:18.465 [2024-11-05 17:59:38.231901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.231996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.232003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.232012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.232019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.232027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.232034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.232041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.232048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.232056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.232074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.232081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.232089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.232097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.232104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.232111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.232118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.232126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.232133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.232140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.232148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.232155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.232163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.232170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.232177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.232186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.232193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.232200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.232209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.232216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:18.465 [2024-11-05 17:59:38.232223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:18.466 [2024-11-05 17:59:38.232230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:18.466 [2024-11-05 17:59:38.232238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:18.466 [2024-11-05 17:59:38.232245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:18.466 [2024-11-05 17:59:38.232253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:18.466 [2024-11-05 17:59:38.232260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:18.466 [2024-11-05 17:59:38.232268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:18.466 [2024-11-05 17:59:38.232275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:18.466 [2024-11-05 17:59:38.232282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:18.466 [2024-11-05 17:59:38.232298] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:18.466 [2024-11-05 17:59:38.232305] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e2282134-860d-4b07-9569-9c3cb89022d4 00:25:18.466 [2024-11-05 17:59:38.232313] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:25:18.466 [2024-11-05 17:59:38.232320] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:18.466 [2024-11-05 17:59:38.232327] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:18.466 [2024-11-05 17:59:38.232335] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:18.466 [2024-11-05 17:59:38.232342] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:18.466 [2024-11-05 17:59:38.232350] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:18.466 [2024-11-05 17:59:38.232356] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:18.466 [2024-11-05 17:59:38.232363] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:18.466 [2024-11-05 17:59:38.232369] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:18.466 [2024-11-05 17:59:38.232375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.466 [2024-11-05 17:59:38.232390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:18.466 [2024-11-05 17:59:38.232405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.848 ms 00:25:18.466 [2024-11-05 17:59:38.232416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.466 [2024-11-05 17:59:38.233796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.466 [2024-11-05 17:59:38.233817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:18.466 [2024-11-05 17:59:38.233827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.366 ms 00:25:18.466 [2024-11-05 17:59:38.233834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.466 [2024-11-05 17:59:38.233909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.466 [2024-11-05 17:59:38.233922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:18.466 [2024-11-05 17:59:38.233931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:25:18.466 [2024-11-05 17:59:38.233938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.466 [2024-11-05 17:59:38.238702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.466 [2024-11-05 17:59:38.238729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:18.466 [2024-11-05 17:59:38.238743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.466 [2024-11-05 17:59:38.238752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.466 [2024-11-05 17:59:38.238797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.466 [2024-11-05 17:59:38.238805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:18.466 [2024-11-05 17:59:38.238829] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.466 [2024-11-05 17:59:38.238836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.466 [2024-11-05 17:59:38.238869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.466 [2024-11-05 17:59:38.238877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:18.466 [2024-11-05 17:59:38.238885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.466 [2024-11-05 17:59:38.238891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.466 [2024-11-05 17:59:38.238908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.466 [2024-11-05 17:59:38.238915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:18.466 [2024-11-05 17:59:38.238923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.466 [2024-11-05 17:59:38.238929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.466 [2024-11-05 17:59:38.247637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.466 [2024-11-05 17:59:38.247679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:18.466 [2024-11-05 17:59:38.247688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.466 [2024-11-05 17:59:38.247702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.466 [2024-11-05 17:59:38.256011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.466 [2024-11-05 17:59:38.256054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:18.466 [2024-11-05 17:59:38.256075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.466 [2024-11-05 17:59:38.256083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.466 [2024-11-05 17:59:38.256129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.466 [2024-11-05 17:59:38.256138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:18.466 [2024-11-05 17:59:38.256146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.466 [2024-11-05 17:59:38.256153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.466 [2024-11-05 17:59:38.256176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.466 [2024-11-05 17:59:38.256190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:18.466 [2024-11-05 17:59:38.256198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.466 [2024-11-05 17:59:38.256205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.466 [2024-11-05 17:59:38.256264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.466 [2024-11-05 17:59:38.256272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:18.466 [2024-11-05 17:59:38.256280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.466 [2024-11-05 17:59:38.256287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.466 [2024-11-05 17:59:38.256323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.466 [2024-11-05 17:59:38.256337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:25:18.466 [2024-11-05 17:59:38.256351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.466 [2024-11-05 17:59:38.256358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.466 [2024-11-05 17:59:38.256389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.466 [2024-11-05 17:59:38.256398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:18.466 [2024-11-05 17:59:38.256405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.466 [2024-11-05 17:59:38.256416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.466 [2024-11-05 17:59:38.256453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:18.466 [2024-11-05 17:59:38.256464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:18.466 [2024-11-05 17:59:38.256476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:18.466 [2024-11-05 17:59:38.256483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.466 [2024-11-05 17:59:38.256594] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 47.346 ms, result 0 00:25:18.466 00:25:18.466 00:25:18.466 17:59:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:20.369 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:25:20.369 17:59:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:25:20.369 17:59:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:25:20.369 17:59:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:20.370 17:59:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:20.628 17:59:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:20.628 17:59:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:20.628 17:59:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:20.628 17:59:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 89779 00:25:20.628 17:59:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@952 -- # '[' -z 89779 ']' 00:25:20.628 17:59:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@956 -- # kill -0 89779 00:25:20.628 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (89779) - No such process 00:25:20.628 Process with pid 89779 is not found 00:25:20.628 17:59:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@979 -- # echo 'Process with pid 89779 is not found' 00:25:20.628 17:59:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:25:20.886 17:59:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:25:20.886 Remove shared memory files 00:25:20.886 17:59:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:20.886 17:59:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:25:20.886 17:59:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:25:20.886 17:59:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 
-- # rm -f rm -f 00:25:20.886 17:59:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:20.886 17:59:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:25:20.887 00:25:20.887 real 2m37.922s 00:25:20.887 user 3m0.565s 00:25:20.887 sys 0m25.525s 00:25:20.887 17:59:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:20.887 ************************************ 00:25:20.887 END TEST ftl_dirty_shutdown 00:25:20.887 ************************************ 00:25:20.887 17:59:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:21.145 17:59:40 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:25:21.145 17:59:40 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:25:21.145 17:59:40 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:21.145 17:59:40 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:21.145 ************************************ 00:25:21.145 START TEST ftl_upgrade_shutdown 00:25:21.145 ************************************ 00:25:21.145 17:59:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:25:21.145 * Looking for test storage... 00:25:21.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:21.145 17:59:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:21.145 17:59:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:21.145 17:59:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:25:21.145 17:59:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:21.145 17:59:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:21.145 17:59:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:21.145 17:59:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:21.145 17:59:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:21.145 17:59:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:21.145 17:59:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:21.145 17:59:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:21.145 17:59:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:21.145 17:59:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:21.145 17:59:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:21.145 17:59:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:21.145 17:59:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:21.145 17:59:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:25:21.145 17:59:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:21.145 17:59:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:21.145 17:59:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:21.145 17:59:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:21.145 17:59:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:21.145 17:59:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:21.145 17:59:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:21.145 17:59:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:21.145 17:59:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:21.145 17:59:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:21.145 17:59:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:21.145 17:59:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:21.145 17:59:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:21.145 17:59:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:21.145 17:59:41 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:25:21.145 17:59:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:21.145 17:59:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:21.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.145 --rc genhtml_branch_coverage=1 00:25:21.145 --rc genhtml_function_coverage=1 00:25:21.145 --rc genhtml_legend=1 00:25:21.145 --rc geninfo_all_blocks=1 00:25:21.145 --rc geninfo_unexecuted_blocks=1 00:25:21.145 00:25:21.145 ' 00:25:21.145 17:59:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:21.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.145 --rc genhtml_branch_coverage=1 00:25:21.145 --rc genhtml_function_coverage=1 00:25:21.145 --rc genhtml_legend=1 00:25:21.145 --rc geninfo_all_blocks=1 00:25:21.145 --rc geninfo_unexecuted_blocks=1 00:25:21.145 00:25:21.145 ' 00:25:21.145 17:59:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:21.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.145 --rc genhtml_branch_coverage=1 00:25:21.145 --rc genhtml_function_coverage=1 00:25:21.146 --rc genhtml_legend=1 00:25:21.146 --rc geninfo_all_blocks=1 00:25:21.146 --rc geninfo_unexecuted_blocks=1 00:25:21.146 00:25:21.146 ' 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:21.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.146 --rc genhtml_branch_coverage=1 00:25:21.146 --rc genhtml_function_coverage=1 00:25:21.146 --rc genhtml_legend=1 00:25:21.146 --rc geninfo_all_blocks=1 00:25:21.146 --rc geninfo_unexecuted_blocks=1 00:25:21.146 00:25:21.146 ' 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:25:21.146 17:59:41 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=91561 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 91561 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 91561 ']' 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:21.146 17:59:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:21.146 [2024-11-05 17:59:41.124784] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:25:21.146 [2024-11-05 17:59:41.125048] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91561 ] 00:25:21.406 [2024-11-05 17:59:41.254194] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
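[annotation] The "lt 1.15 2" trace from scripts/common.sh earlier in this test decides which lcov option spelling to export by comparing version strings component-wise. A condensed sketch of that comparison logic, assuming numeric components and the same '.', '-', ':' separators the trace splits on (this is a simplification, not the full cmp_versions implementation):

    # Sketch of the cmp_versions logic traced above: split both version
    # strings on '.', '-' and ':' into arrays, then compare numerically,
    # component by component; missing components count as 0.
    version_lt() {
        local IFS='.-:' i
        local -a a=($1) b=($2)
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1  # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov older than 2: keep --rc lcov_branch_coverage=1"

As in the trace, 1.15 compares less than 2 because the first components already differ (1 < 2), so the pre-2.0 lcov option names (lcov_branch_coverage=1, lcov_function_coverage=1) are exported.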
00:25:21.406 [2024-11-05 17:59:41.285327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.406 [2024-11-05 17:59:41.304500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.974 17:59:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:21.974 17:59:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:25:21.974 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:21.974 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:25:21.974 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:25:21.974 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:21.974 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:25:21.974 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:21.974 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:25:21.974 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:21.974 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:25:21.974 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:21.974 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:25:21.974 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:21.974 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:25:21.974 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:21.974 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:25:21.974 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:25:21.975 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:25:21.975 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:21.975 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:25:21.975 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:21.975 17:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:25:22.233 17:59:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:25:22.233 17:59:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:22.233 17:59:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:25:22.233 17:59:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=basen1 00:25:22.233 17:59:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:25:22.233 17:59:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:25:22.233 17:59:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:25:22.233 17:59:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:25:22.492 17:59:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:25:22.492 { 00:25:22.492 "name": 
"basen1", 00:25:22.492 "aliases": [ 00:25:22.492 "2130dc9b-a794-42f4-8c29-0d0d9bae49eb" 00:25:22.492 ], 00:25:22.492 "product_name": "NVMe disk", 00:25:22.492 "block_size": 4096, 00:25:22.492 "num_blocks": 1310720, 00:25:22.492 "uuid": "2130dc9b-a794-42f4-8c29-0d0d9bae49eb", 00:25:22.492 "numa_id": -1, 00:25:22.492 "assigned_rate_limits": { 00:25:22.492 "rw_ios_per_sec": 0, 00:25:22.492 "rw_mbytes_per_sec": 0, 00:25:22.492 "r_mbytes_per_sec": 0, 00:25:22.492 "w_mbytes_per_sec": 0 00:25:22.492 }, 00:25:22.492 "claimed": true, 00:25:22.492 "claim_type": "read_many_write_one", 00:25:22.492 "zoned": false, 00:25:22.492 "supported_io_types": { 00:25:22.492 "read": true, 00:25:22.492 "write": true, 00:25:22.492 "unmap": true, 00:25:22.492 "flush": true, 00:25:22.492 "reset": true, 00:25:22.492 "nvme_admin": true, 00:25:22.492 "nvme_io": true, 00:25:22.492 "nvme_io_md": false, 00:25:22.492 "write_zeroes": true, 00:25:22.492 "zcopy": false, 00:25:22.492 "get_zone_info": false, 00:25:22.492 "zone_management": false, 00:25:22.492 "zone_append": false, 00:25:22.492 "compare": true, 00:25:22.492 "compare_and_write": false, 00:25:22.492 "abort": true, 00:25:22.492 "seek_hole": false, 00:25:22.492 "seek_data": false, 00:25:22.492 "copy": true, 00:25:22.492 "nvme_iov_md": false 00:25:22.492 }, 00:25:22.492 "driver_specific": { 00:25:22.492 "nvme": [ 00:25:22.492 { 00:25:22.492 "pci_address": "0000:00:11.0", 00:25:22.492 "trid": { 00:25:22.492 "trtype": "PCIe", 00:25:22.492 "traddr": "0000:00:11.0" 00:25:22.492 }, 00:25:22.492 "ctrlr_data": { 00:25:22.492 "cntlid": 0, 00:25:22.492 "vendor_id": "0x1b36", 00:25:22.492 "model_number": "QEMU NVMe Ctrl", 00:25:22.492 "serial_number": "12341", 00:25:22.492 "firmware_revision": "8.0.0", 00:25:22.492 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:22.492 "oacs": { 00:25:22.492 "security": 0, 00:25:22.492 "format": 1, 00:25:22.492 "firmware": 0, 00:25:22.492 "ns_manage": 1 00:25:22.492 }, 00:25:22.492 "multi_ctrlr": false, 00:25:22.492 "ana_reporting": false 00:25:22.492 }, 00:25:22.492 "vs": { 00:25:22.492 "nvme_version": "1.4" 00:25:22.492 }, 00:25:22.492 "ns_data": { 00:25:22.492 "id": 1, 00:25:22.492 "can_share": false 00:25:22.492 } 00:25:22.492 } 00:25:22.492 ], 00:25:22.492 "mp_policy": "active_passive" 00:25:22.492 } 00:25:22.492 } 00:25:22.492 ]' 00:25:22.492 17:59:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:25:22.492 17:59:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:25:22.492 17:59:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:25:22.492 17:59:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:25:22.492 17:59:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:25:22.492 17:59:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:25:22.492 17:59:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:22.492 17:59:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:25:22.492 17:59:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:22.492 17:59:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:22.492 17:59:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:22.752 17:59:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=0b48809d-b1e7-4ea9-bf23-cb2429ae45dd 00:25:22.752 17:59:42 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:22.752 17:59:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0b48809d-b1e7-4ea9-bf23-cb2429ae45dd 00:25:23.014 17:59:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:25:23.275 17:59:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=f74480dd-c6ee-4d77-ab53-5a28cdc38bcf 00:25:23.275 17:59:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u f74480dd-c6ee-4d77-ab53-5a28cdc38bcf 00:25:23.537 17:59:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=f35ce328-fe9f-40b7-a0ba-e8cb36e85c52 00:25:23.537 17:59:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z f35ce328-fe9f-40b7-a0ba-e8cb36e85c52 ]] 00:25:23.537 17:59:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 f35ce328-fe9f-40b7-a0ba-e8cb36e85c52 5120 00:25:23.537 17:59:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:25:23.537 17:59:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:23.537 17:59:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=f35ce328-fe9f-40b7-a0ba-e8cb36e85c52 00:25:23.537 17:59:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:25:23.537 17:59:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size f35ce328-fe9f-40b7-a0ba-e8cb36e85c52 00:25:23.537 17:59:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=f35ce328-fe9f-40b7-a0ba-e8cb36e85c52 00:25:23.537 17:59:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:25:23.537 17:59:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:25:23.537 17:59:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:25:23.537 17:59:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f35ce328-fe9f-40b7-a0ba-e8cb36e85c52 00:25:23.800 17:59:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:25:23.800 { 00:25:23.800 "name": "f35ce328-fe9f-40b7-a0ba-e8cb36e85c52", 00:25:23.800 "aliases": [ 00:25:23.800 "lvs/basen1p0" 00:25:23.800 ], 00:25:23.800 "product_name": "Logical Volume", 00:25:23.800 "block_size": 4096, 00:25:23.800 "num_blocks": 5242880, 00:25:23.800 "uuid": "f35ce328-fe9f-40b7-a0ba-e8cb36e85c52", 00:25:23.800 "assigned_rate_limits": { 00:25:23.800 "rw_ios_per_sec": 0, 00:25:23.800 "rw_mbytes_per_sec": 0, 00:25:23.800 "r_mbytes_per_sec": 0, 00:25:23.800 "w_mbytes_per_sec": 0 00:25:23.800 }, 00:25:23.800 "claimed": false, 00:25:23.800 "zoned": false, 00:25:23.800 "supported_io_types": { 00:25:23.800 "read": true, 00:25:23.800 "write": true, 00:25:23.800 "unmap": true, 00:25:23.800 "flush": false, 00:25:23.800 "reset": true, 00:25:23.800 "nvme_admin": false, 00:25:23.800 "nvme_io": false, 00:25:23.800 "nvme_io_md": false, 00:25:23.800 "write_zeroes": true, 00:25:23.800 "zcopy": false, 00:25:23.800 "get_zone_info": false, 00:25:23.800 "zone_management": false, 00:25:23.800 "zone_append": false, 00:25:23.800 "compare": false, 00:25:23.800 "compare_and_write": false, 00:25:23.800 "abort": false, 00:25:23.800 "seek_hole": true, 00:25:23.800 "seek_data": true, 
00:25:23.800 "copy": false, 00:25:23.800 "nvme_iov_md": false 00:25:23.800 }, 00:25:23.800 "driver_specific": { 00:25:23.800 "lvol": { 00:25:23.800 "lvol_store_uuid": "f74480dd-c6ee-4d77-ab53-5a28cdc38bcf", 00:25:23.800 "base_bdev": "basen1", 00:25:23.800 "thin_provision": true, 00:25:23.800 "num_allocated_clusters": 0, 00:25:23.800 "snapshot": false, 00:25:23.800 "clone": false, 00:25:23.800 "esnap_clone": false 00:25:23.800 } 00:25:23.800 } 00:25:23.800 } 00:25:23.800 ]' 00:25:23.800 17:59:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:25:23.800 17:59:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:25:23.800 17:59:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:25:23.800 17:59:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=5242880 00:25:23.800 17:59:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=20480 00:25:23.800 17:59:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 20480 00:25:23.800 17:59:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:25:23.800 17:59:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:23.800 17:59:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:25:24.091 17:59:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:25:24.091 17:59:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:25:24.091 17:59:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:25:24.351 17:59:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:25:24.351 17:59:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:25:24.351 17:59:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d f35ce328-fe9f-40b7-a0ba-e8cb36e85c52 -c cachen1p0 --l2p_dram_limit 2 00:25:24.351 [2024-11-05 17:59:44.332751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.351 [2024-11-05 17:59:44.332812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:25:24.351 [2024-11-05 17:59:44.332830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:24.351 [2024-11-05 17:59:44.332839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.351 [2024-11-05 17:59:44.332901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.352 [2024-11-05 17:59:44.332911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:24.352 [2024-11-05 17:59:44.332926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:25:24.352 [2024-11-05 17:59:44.332934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.352 [2024-11-05 17:59:44.332956] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:25:24.352 [2024-11-05 17:59:44.333631] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:25:24.352 [2024-11-05 17:59:44.333669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.352 [2024-11-05 17:59:44.333679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 
name: Open cache bdev 00:25:24.352 [2024-11-05 17:59:44.333692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.717 ms 00:25:24.352 [2024-11-05 17:59:44.333700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.352 [2024-11-05 17:59:44.333746] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 5b175487-60f0-4f62-9149-393cee6672a5 00:25:24.352 [2024-11-05 17:59:44.335381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.352 [2024-11-05 17:59:44.335420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:25:24.352 [2024-11-05 17:59:44.335431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:25:24.352 [2024-11-05 17:59:44.335440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.352 [2024-11-05 17:59:44.343116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.352 [2024-11-05 17:59:44.343150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:24.352 [2024-11-05 17:59:44.343160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.629 ms 00:25:24.352 [2024-11-05 17:59:44.343172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.352 [2024-11-05 17:59:44.343257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.352 [2024-11-05 17:59:44.343269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:24.352 [2024-11-05 17:59:44.343278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:25:24.352 [2024-11-05 17:59:44.343287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.352 [2024-11-05 17:59:44.343344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.352 [2024-11-05 17:59:44.343357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:25:24.352 [2024-11-05 17:59:44.343365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:25:24.352 [2024-11-05 17:59:44.343375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.352 [2024-11-05 17:59:44.343396] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:25:24.611 [2024-11-05 17:59:44.345284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.611 [2024-11-05 17:59:44.345315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:24.611 [2024-11-05 17:59:44.345327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.890 ms 00:25:24.611 [2024-11-05 17:59:44.345335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.611 [2024-11-05 17:59:44.345363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.611 [2024-11-05 17:59:44.345371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:25:24.611 [2024-11-05 17:59:44.345384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:24.611 [2024-11-05 17:59:44.345396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.611 [2024-11-05 17:59:44.345414] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:25:24.611 [2024-11-05 17:59:44.345554] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:25:24.611 [2024-11-05 
17:59:44.345572] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:25:24.611 [2024-11-05 17:59:44.345583] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:25:24.611 [2024-11-05 17:59:44.345598] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:25:24.611 [2024-11-05 17:59:44.345607] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:25:24.611 [2024-11-05 17:59:44.345621] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:25:24.611 [2024-11-05 17:59:44.345629] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:25:24.611 [2024-11-05 17:59:44.345638] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:25:24.611 [2024-11-05 17:59:44.345645] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:25:24.611 [2024-11-05 17:59:44.345662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.611 [2024-11-05 17:59:44.345670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:25:24.611 [2024-11-05 17:59:44.345680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.249 ms 00:25:24.611 [2024-11-05 17:59:44.345688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.611 [2024-11-05 17:59:44.345775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.611 [2024-11-05 17:59:44.345784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:25:24.611 [2024-11-05 17:59:44.345795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:25:24.611 [2024-11-05 17:59:44.345803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.611 [2024-11-05 17:59:44.345899] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:25:24.611 [2024-11-05 17:59:44.345912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:25:24.611 [2024-11-05 17:59:44.345922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:24.611 [2024-11-05 17:59:44.345931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:24.611 [2024-11-05 17:59:44.345942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:25:24.611 [2024-11-05 17:59:44.345950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:25:24.611 [2024-11-05 17:59:44.345960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:25:24.611 [2024-11-05 17:59:44.345968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:25:24.611 [2024-11-05 17:59:44.345978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:25:24.611 [2024-11-05 17:59:44.345985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:24.611 [2024-11-05 17:59:44.345996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:25:24.611 [2024-11-05 17:59:44.346004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:25:24.611 [2024-11-05 17:59:44.346016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:24.611 [2024-11-05 17:59:44.346024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:25:24.611 [2024-11-05 17:59:44.346033] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:25:24.611 [2024-11-05 17:59:44.346040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:24.611 [2024-11-05 17:59:44.346050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:25:24.611 [2024-11-05 17:59:44.346057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:25:24.611 [2024-11-05 17:59:44.346080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:24.611 [2024-11-05 17:59:44.346089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:25:24.611 [2024-11-05 17:59:44.346098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:25:24.611 [2024-11-05 17:59:44.346105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:24.611 [2024-11-05 17:59:44.346115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:25:24.611 [2024-11-05 17:59:44.346123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:25:24.611 [2024-11-05 17:59:44.346132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:24.611 [2024-11-05 17:59:44.346141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:25:24.611 [2024-11-05 17:59:44.346151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:25:24.611 [2024-11-05 17:59:44.346158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:24.611 [2024-11-05 17:59:44.346170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:25:24.611 [2024-11-05 17:59:44.346177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:25:24.611 [2024-11-05 17:59:44.346186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:24.611 [2024-11-05 17:59:44.346194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:25:24.611 [2024-11-05 17:59:44.346203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:25:24.611 [2024-11-05 17:59:44.346210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:24.611 [2024-11-05 17:59:44.346221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:25:24.611 [2024-11-05 17:59:44.346228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:25:24.611 [2024-11-05 17:59:44.346240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:24.611 [2024-11-05 17:59:44.346248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:25:24.611 [2024-11-05 17:59:44.346257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:25:24.611 [2024-11-05 17:59:44.346265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:24.611 [2024-11-05 17:59:44.346274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:25:24.611 [2024-11-05 17:59:44.346282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:25:24.611 [2024-11-05 17:59:44.346290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:24.611 [2024-11-05 17:59:44.346298] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:25:24.611 [2024-11-05 17:59:44.346330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:25:24.611 [2024-11-05 17:59:44.346337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:24.611 [2024-11-05 17:59:44.346346] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:24.611 [2024-11-05 17:59:44.346356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:25:24.611 [2024-11-05 17:59:44.346364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:25:24.611 [2024-11-05 17:59:44.346370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:25:24.611 [2024-11-05 17:59:44.346383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:25:24.611 [2024-11-05 17:59:44.346390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:25:24.611 [2024-11-05 17:59:44.346398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:25:24.611 [2024-11-05 17:59:44.346408] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:25:24.611 [2024-11-05 17:59:44.346420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:24.611 [2024-11-05 17:59:44.346429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:25:24.611 [2024-11-05 17:59:44.346438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:25:24.611 [2024-11-05 17:59:44.346446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:25:24.611 [2024-11-05 17:59:44.346455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:25:24.611 [2024-11-05 17:59:44.346462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:25:24.611 [2024-11-05 17:59:44.346474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:25:24.611 [2024-11-05 17:59:44.346481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:25:24.611 [2024-11-05 17:59:44.346490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:25:24.611 [2024-11-05 17:59:44.346497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:25:24.611 [2024-11-05 17:59:44.346505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:25:24.612 [2024-11-05 17:59:44.346513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:25:24.612 [2024-11-05 17:59:44.346521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:25:24.612 [2024-11-05 17:59:44.346529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:25:24.612 [2024-11-05 17:59:44.346537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:25:24.612 [2024-11-05 17:59:44.346544] upgrade/ftl_sb_v5.c: 
422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:25:24.612 [2024-11-05 17:59:44.346554] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:24.612 [2024-11-05 17:59:44.346586] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:24.612 [2024-11-05 17:59:44.346595] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:25:24.612 [2024-11-05 17:59:44.346603] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:25:24.612 [2024-11-05 17:59:44.346611] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:25:24.612 [2024-11-05 17:59:44.346619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.612 [2024-11-05 17:59:44.346629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:25:24.612 [2024-11-05 17:59:44.346637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.788 ms 00:25:24.612 [2024-11-05 17:59:44.346646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.612 [2024-11-05 17:59:44.346683] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:25:24.612 [2024-11-05 17:59:44.346694] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:25:27.137 [2024-11-05 17:59:47.094624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.137 [2024-11-05 17:59:47.094707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:25:27.137 [2024-11-05 17:59:47.094724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2747.929 ms 00:25:27.137 [2024-11-05 17:59:47.094736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.137 [2024-11-05 17:59:47.105546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.137 [2024-11-05 17:59:47.105598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:27.137 [2024-11-05 17:59:47.105618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.707 ms 00:25:27.137 [2024-11-05 17:59:47.105635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.137 [2024-11-05 17:59:47.105694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.137 [2024-11-05 17:59:47.105707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:25:27.137 [2024-11-05 17:59:47.105718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:25:27.137 [2024-11-05 17:59:47.105732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.137 [2024-11-05 17:59:47.116635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.137 [2024-11-05 17:59:47.116849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:27.137 [2024-11-05 17:59:47.116871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.862 ms 00:25:27.137 [2024-11-05 17:59:47.116881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.137 [2024-11-05 17:59:47.116916] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.137 [2024-11-05 17:59:47.116930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:27.137 [2024-11-05 17:59:47.116942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:27.137 [2024-11-05 17:59:47.116951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.137 [2024-11-05 17:59:47.117412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.137 [2024-11-05 17:59:47.117438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:27.137 [2024-11-05 17:59:47.117451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.415 ms 00:25:27.137 [2024-11-05 17:59:47.117473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.137 [2024-11-05 17:59:47.117515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.137 [2024-11-05 17:59:47.117527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:27.137 [2024-11-05 17:59:47.117536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:25:27.137 [2024-11-05 17:59:47.117546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.137 [2024-11-05 17:59:47.124615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.137 [2024-11-05 17:59:47.124653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:27.137 [2024-11-05 17:59:47.124662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.049 ms 00:25:27.137 [2024-11-05 17:59:47.124672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.394 [2024-11-05 17:59:47.133722] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:25:27.394 [2024-11-05 17:59:47.134771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.395 [2024-11-05 17:59:47.134799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:25:27.395 [2024-11-05 17:59:47.134819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.034 ms 00:25:27.395 [2024-11-05 17:59:47.134828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.395 [2024-11-05 17:59:47.166952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.395 [2024-11-05 17:59:47.166997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:25:27.395 [2024-11-05 17:59:47.167015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.095 ms 00:25:27.395 [2024-11-05 17:59:47.167024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.395 [2024-11-05 17:59:47.167117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.395 [2024-11-05 17:59:47.167144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:25:27.395 [2024-11-05 17:59:47.167156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:25:27.395 [2024-11-05 17:59:47.167164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.395 [2024-11-05 17:59:47.169565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.395 [2024-11-05 17:59:47.169600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:25:27.395 [2024-11-05 17:59:47.169615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 
duration: 2.366 ms 00:25:27.395 [2024-11-05 17:59:47.169623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.395 [2024-11-05 17:59:47.172725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.395 [2024-11-05 17:59:47.172755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:25:27.395 [2024-11-05 17:59:47.172767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.077 ms 00:25:27.395 [2024-11-05 17:59:47.172774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.395 [2024-11-05 17:59:47.173056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.395 [2024-11-05 17:59:47.173079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:25:27.395 [2024-11-05 17:59:47.173097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.261 ms 00:25:27.395 [2024-11-05 17:59:47.173108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.395 [2024-11-05 17:59:47.206231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.395 [2024-11-05 17:59:47.206384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:25:27.395 [2024-11-05 17:59:47.206405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.101 ms 00:25:27.395 [2024-11-05 17:59:47.206414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.395 [2024-11-05 17:59:47.210390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.395 [2024-11-05 17:59:47.210425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:25:27.395 [2024-11-05 17:59:47.210437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.928 ms 00:25:27.395 [2024-11-05 17:59:47.210445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.395 [2024-11-05 17:59:47.213751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.395 [2024-11-05 17:59:47.213863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:25:27.395 [2024-11-05 17:59:47.213881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.269 ms 00:25:27.395 [2024-11-05 17:59:47.213889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.395 [2024-11-05 17:59:47.218010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.395 [2024-11-05 17:59:47.218119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:25:27.395 [2024-11-05 17:59:47.218182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.087 ms 00:25:27.395 [2024-11-05 17:59:47.218205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.395 [2024-11-05 17:59:47.218300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.395 [2024-11-05 17:59:47.218349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:25:27.395 [2024-11-05 17:59:47.218373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:25:27.395 [2024-11-05 17:59:47.218427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.395 [2024-11-05 17:59:47.218513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.395 [2024-11-05 17:59:47.218550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:25:27.395 [2024-11-05 17:59:47.218634] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:25:27.395 [2024-11-05 17:59:47.218657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.395 [2024-11-05 17:59:47.219736] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2886.543 ms, result 0 00:25:27.395 { 00:25:27.395 "name": "ftl", 00:25:27.395 "uuid": "5b175487-60f0-4f62-9149-393cee6672a5" 00:25:27.395 } 00:25:27.395 17:59:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:25:27.653 [2024-11-05 17:59:47.445423] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:27.653 17:59:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:25:27.911 17:59:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:25:27.911 [2024-11-05 17:59:47.857821] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:25:27.911 17:59:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:25:28.171 [2024-11-05 17:59:48.062209] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:28.171 17:59:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:25:28.431 17:59:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:25:28.431 17:59:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:25:28.431 17:59:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:25:28.431 17:59:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:25:28.431 17:59:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:25:28.431 17:59:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:25:28.432 17:59:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:25:28.432 17:59:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:25:28.432 17:59:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:25:28.432 Fill FTL, iteration 1 00:25:28.432 17:59:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:28.432 17:59:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:25:28.432 17:59:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:28.432 17:59:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:28.432 17:59:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:28.432 17:59:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:28.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 
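Stripped of the xtrace noise, the export sequence above is five RPCs against the target: create the TCP transport, create a subsystem, attach the new ftl bdev as its namespace, open a listener on the loopback, and snapshot the configuration. A condensed sketch; the redirect destination for save_config is an assumption (the harness keeps it as test/ftl/config/tgt.json).

```bash
# Publish the freshly created "ftl" bdev over NVMe/TCP on the loopback so a
# second SPDK process can attach to it.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

$rpc nvmf_create_transport --trtype TCP                        # TCP transport init
$rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1  # any host, 1 namespace max
$rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl      # expose the FTL bdev
$rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 \
	-t TCP -f ipv4 -s 4420 -a 127.0.0.1                        # listen on 127.0.0.1:4420
$rpc save_config > tgt.json    # destination file is an assumption (test/ftl/config/tgt.json)
```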
00:25:28.432 17:59:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:25:28.432 17:59:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=91672 00:25:28.432 17:59:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:25:28.432 17:59:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 91672 /var/tmp/spdk.tgt.sock 00:25:28.432 17:59:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:25:28.432 17:59:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 91672 ']' 00:25:28.432 17:59:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:25:28.432 17:59:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:28.432 17:59:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:25:28.432 17:59:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:28.432 17:59:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:28.693 [2024-11-05 17:59:48.481423] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:25:28.693 [2024-11-05 17:59:48.481706] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91672 ] 00:25:28.693 [2024-11-05 17:59:48.611101] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
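The initiator side being set up here is simply a second spdk_tgt instance on its own core with its own RPC socket, so the two processes never contend for /var/tmp/spdk.sock. A minimal sketch, reusing the simplified polling loop from the target bringup above:

```bash
# Sketch of tcp_initiator_setup: second spdk_tgt instance, core 1, private socket.
SPDK_DIR=/home/vagrant/spdk_repo/spdk

"$SPDK_DIR/build/bin/spdk_tgt" --cpumask='[1]' \
	--rpc-socket=/var/tmp/spdk.tgt.sock &
spdk_ini_pid=$!

# Every initiator-side RPC selects this instance with -s.
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.tgt.sock rpc_get_methods &> /dev/null; do
	sleep 0.1
done
```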
00:25:28.693 [2024-11-05 17:59:48.641961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.693 [2024-11-05 17:59:48.662295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.630 17:59:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:29.630 17:59:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:25:29.630 17:59:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:25:29.630 ftln1 00:25:29.630 17:59:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:25:29.630 17:59:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:25:29.888 17:59:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:25:29.888 17:59:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 91672 00:25:29.888 17:59:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 91672 ']' 00:25:29.888 17:59:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 91672 00:25:29.888 17:59:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:25:29.888 17:59:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:29.888 17:59:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 91672 00:25:29.888 killing process with pid 91672 00:25:29.888 17:59:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:29.888 17:59:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:29.888 17:59:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 91672' 00:25:29.888 17:59:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 91672 00:25:29.888 17:59:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 91672 00:25:30.147 17:59:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:25:30.147 17:59:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:30.147 [2024-11-05 17:59:50.134692] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:25:30.147 [2024-11-05 17:59:50.134831] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91707 ] 00:25:30.410 [2024-11-05 17:59:50.264350] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
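The step above is worth spelling out: the initiator attaches to the exported subsystem over TCP (which yields the ftln1 bdev), dumps just the bdev subsystem configuration into a JSON wrapper, and is then killed, because spdk_dd rebuilds the same bdev stack from that file on every invocation. A sketch under those assumptions; the ini.json path is shortened here (the log uses test/ftl/config/ini.json).

```bash
# Attach over TCP; the RPC prints the name of the new bdev: ftln1.
rpc_ini="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
$rpc_ini bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
	-f ipv4 -n nqn.2018-09.io.spdk:cnode0

# Wrap only the bdev subsystem config in a subsystems array for spdk_dd.
{
	echo '{"subsystems": ['
	$rpc_ini save_subsystem_config -n bdev
	echo ']}'
} > ini.json

kill "$spdk_ini_pid"    # pid saved at initiator launch; spdk_dd replaces it

# Fill the first 1 GiB of the FTL bdev with random data, 1 MiB blocks, QD 2.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --cpumask='[1]' \
	--rpc-socket=/var/tmp/spdk.tgt.sock --json=ini.json \
	--if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
```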
00:25:30.410 [2024-11-05 17:59:50.296299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.410 [2024-11-05 17:59:50.315872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:31.785  [2024-11-05T17:59:52.709Z] Copying: 224/1024 [MB] (224 MBps) [2024-11-05T17:59:53.645Z] Copying: 453/1024 [MB] (229 MBps) [2024-11-05T17:59:54.589Z] Copying: 732/1024 [MB] (279 MBps) [2024-11-05T17:59:54.850Z] Copying: 964/1024 [MB] (232 MBps) [2024-11-05T17:59:55.110Z] Copying: 1024/1024 [MB] (average 239 MBps) 00:25:35.119 00:25:35.119 Calculate MD5 checksum, iteration 1 00:25:35.119 17:59:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:25:35.119 17:59:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:25:35.119 17:59:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:35.119 17:59:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:35.119 17:59:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:35.119 17:59:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:35.119 17:59:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:35.120 17:59:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:35.120 [2024-11-05 17:59:55.017798] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:25:35.120 [2024-11-05 17:59:55.017916] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91755 ] 00:25:35.417 [2024-11-05 17:59:55.146528] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
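The checksum pass mirrors the fill: the same spdk_dd stack reads the 1 GiB window back into a plain file, which is then hashed. A sketch with the scratch file shortened to ./file (the log keeps it under test/ftl/):

```bash
# Read the just-written window back through the initiator config and hash it.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --cpumask='[1]' \
	--rpc-socket=/var/tmp/spdk.tgt.sock --json=ini.json \
	--ib=ftln1 --of=file --bs=1048576 --count=1024 --qd=2 --skip=0

sum=$(md5sum file | cut -f1 -d' ')    # keep only the digest column
echo "iteration 1 md5: $sum"
```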
00:25:35.417 [2024-11-05 17:59:55.172864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.417 [2024-11-05 17:59:55.201457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:36.802  [2024-11-05T17:59:57.051Z] Copying: 740/1024 [MB] (740 MBps) [2024-11-05T17:59:57.051Z] Copying: 1024/1024 [MB] (average 708 MBps) 00:25:37.060 00:25:37.060 17:59:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:25:37.060 17:59:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:39.590 17:59:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:25:39.590 Fill FTL, iteration 2 00:25:39.590 17:59:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=cbe1dce3cd7ad452c880246204737006 00:25:39.590 17:59:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:25:39.590 17:59:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:39.590 17:59:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:25:39.590 17:59:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:25:39.590 17:59:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:39.590 17:59:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:39.590 17:59:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:39.590 17:59:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:39.590 17:59:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:25:39.590 [2024-11-05 17:59:59.263028] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:25:39.590 [2024-11-05 17:59:59.263200] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91805 ] 00:25:39.590 [2024-11-05 17:59:59.411561] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
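Taken together, the two iterations traced here follow one loop: write the next 1 GiB window, read it back, and record its digest so the data can be re-verified after the FTL device is torn down and brought back up. A condensed reconstruction of that loop, assuming tcp_dd is the harness wrapper around spdk_dd seen in the trace (ftl/common.sh@198):

```bash
# Reconstruction of the fill/checksum loop (upgrade_shutdown.sh lines 38-48).
bs=1048576 count=1024 qd=2 iterations=2
seek=0 skip=0
sums=()

for ((i = 0; i < iterations; i++)); do
	echo "Fill FTL, iteration $((i + 1))"
	tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
	seek=$((seek + count))    # next iteration writes the following window

	echo "Calculate MD5 checksum, iteration $((i + 1))"
	tcp_dd --ib=ftln1 --of=file --bs=$bs --count=$count --qd=$qd --skip=$skip
	skip=$((skip + count))

	sums[i]=$(md5sum file | cut -f1 -d' ')    # digest kept for post-upgrade compare
done
```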
00:25:39.590 [2024-11-05 17:59:59.440257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.590 [2024-11-05 17:59:59.465102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.002  [2024-11-05T18:00:01.926Z] Copying: 217/1024 [MB] (217 MBps) [2024-11-05T18:00:02.858Z] Copying: 457/1024 [MB] (240 MBps) [2024-11-05T18:00:03.791Z] Copying: 731/1024 [MB] (274 MBps) [2024-11-05T18:00:03.791Z] Copying: 1019/1024 [MB] (288 MBps) [2024-11-05T18:00:04.049Z] Copying: 1024/1024 [MB] (average 254 MBps) 00:25:44.058 00:25:44.058 Calculate MD5 checksum, iteration 2 00:25:44.058 18:00:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:25:44.058 18:00:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:25:44.058 18:00:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:44.058 18:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:44.058 18:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:44.058 18:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:44.058 18:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:44.058 18:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:44.058 [2024-11-05 18:00:03.893797] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:25:44.058 [2024-11-05 18:00:03.893899] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91854 ] 00:25:44.058 [2024-11-05 18:00:04.017132] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
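Once the second digest lands in the sums array (visible just below), the trace switches from data movement to the FTL property interface: verbose_mode is enabled, prep_upgrade_on_shutdown is armed, and jq counts NV-cache chunks with non-zero utilization (the trace yields 3). A compact sketch of that round-trip; the direction of the final assertion is an assumption read off the [[ 3 -eq 0 ]] test visible further down.

```bash
# Property round-trip performed by the remainder of the trace.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

$rpc bdev_ftl_set_property -b ftl -p verbose_mode -v true
$rpc bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true

# Non-zero utilization means a chunk still holds user data; the trace shows
# used=3 being tested against 0 before the shutdown/upgrade step proceeds.
used=$($rpc bdev_ftl_get_properties -b ftl \
	| jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
[[ $used -eq 0 ]] && exit 1    # assumed: abort if there is nothing to migrate
```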
00:25:44.058 [2024-11-05 18:00:04.047882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.315 [2024-11-05 18:00:04.068642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:45.687  [2024-11-05T18:00:06.243Z] Copying: 616/1024 [MB] (616 MBps) [2024-11-05T18:00:07.177Z] Copying: 1024/1024 [MB] (average 617 MBps) 00:25:47.186 00:25:47.186 18:00:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:25:47.186 18:00:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:49.084 18:00:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:25:49.084 18:00:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=b05d1bc4b5b0345f8d5efe834e617983 00:25:49.085 18:00:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:25:49.085 18:00:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:49.085 18:00:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:49.342 [2024-11-05 18:00:09.147515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.342 [2024-11-05 18:00:09.147782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:49.342 [2024-11-05 18:00:09.147849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:25:49.342 [2024-11-05 18:00:09.147875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.342 [2024-11-05 18:00:09.147916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.342 [2024-11-05 18:00:09.147934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:49.343 [2024-11-05 18:00:09.147950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:49.343 [2024-11-05 18:00:09.147966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.343 [2024-11-05 18:00:09.147992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.343 [2024-11-05 18:00:09.148009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:49.343 [2024-11-05 18:00:09.148029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:49.343 [2024-11-05 18:00:09.148090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.343 [2024-11-05 18:00:09.148184] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.656 ms, result 0 00:25:49.343 true 00:25:49.343 18:00:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:49.343 { 00:25:49.343 "name": "ftl", 00:25:49.343 "properties": [ 00:25:49.343 { 00:25:49.343 "name": "superblock_version", 00:25:49.343 "value": 5, 00:25:49.343 "read-only": true 00:25:49.343 }, 00:25:49.343 { 00:25:49.343 "name": "base_device", 00:25:49.343 "bands": [ 00:25:49.343 { 00:25:49.343 "id": 0, 00:25:49.343 "state": "FREE", 00:25:49.343 "validity": 0.0 00:25:49.343 }, 00:25:49.343 { 00:25:49.343 "id": 1, 00:25:49.343 "state": "FREE", 00:25:49.343 "validity": 0.0 00:25:49.343 }, 00:25:49.343 { 00:25:49.343 "id": 2, 00:25:49.343 "state": "FREE", 00:25:49.343 "validity": 0.0 00:25:49.343 }, 00:25:49.343 { 00:25:49.343 "id": 3, 00:25:49.343 "state": "FREE", 
00:25:49.343 "validity": 0.0 00:25:49.343 }, 00:25:49.343 { 00:25:49.343 "id": 4, 00:25:49.343 "state": "FREE", 00:25:49.343 "validity": 0.0 00:25:49.343 }, 00:25:49.343 { 00:25:49.343 "id": 5, 00:25:49.343 "state": "FREE", 00:25:49.343 "validity": 0.0 00:25:49.343 }, 00:25:49.343 { 00:25:49.343 "id": 6, 00:25:49.343 "state": "FREE", 00:25:49.343 "validity": 0.0 00:25:49.343 }, 00:25:49.343 { 00:25:49.343 "id": 7, 00:25:49.343 "state": "FREE", 00:25:49.343 "validity": 0.0 00:25:49.343 }, 00:25:49.343 { 00:25:49.343 "id": 8, 00:25:49.343 "state": "FREE", 00:25:49.343 "validity": 0.0 00:25:49.343 }, 00:25:49.343 { 00:25:49.343 "id": 9, 00:25:49.343 "state": "FREE", 00:25:49.343 "validity": 0.0 00:25:49.343 }, 00:25:49.343 { 00:25:49.343 "id": 10, 00:25:49.343 "state": "FREE", 00:25:49.343 "validity": 0.0 00:25:49.343 }, 00:25:49.343 { 00:25:49.343 "id": 11, 00:25:49.343 "state": "FREE", 00:25:49.343 "validity": 0.0 00:25:49.343 }, 00:25:49.343 { 00:25:49.343 "id": 12, 00:25:49.343 "state": "FREE", 00:25:49.343 "validity": 0.0 00:25:49.343 }, 00:25:49.343 { 00:25:49.343 "id": 13, 00:25:49.343 "state": "FREE", 00:25:49.343 "validity": 0.0 00:25:49.343 }, 00:25:49.343 { 00:25:49.343 "id": 14, 00:25:49.343 "state": "FREE", 00:25:49.343 "validity": 0.0 00:25:49.343 }, 00:25:49.343 { 00:25:49.343 "id": 15, 00:25:49.343 "state": "FREE", 00:25:49.343 "validity": 0.0 00:25:49.343 }, 00:25:49.343 { 00:25:49.343 "id": 16, 00:25:49.343 "state": "FREE", 00:25:49.343 "validity": 0.0 00:25:49.343 }, 00:25:49.343 { 00:25:49.343 "id": 17, 00:25:49.343 "state": "FREE", 00:25:49.343 "validity": 0.0 00:25:49.343 } 00:25:49.343 ], 00:25:49.343 "read-only": true 00:25:49.343 }, 00:25:49.343 { 00:25:49.343 "name": "cache_device", 00:25:49.343 "type": "bdev", 00:25:49.343 "chunks": [ 00:25:49.343 { 00:25:49.343 "id": 0, 00:25:49.343 "state": "INACTIVE", 00:25:49.343 "utilization": 0.0 00:25:49.343 }, 00:25:49.343 { 00:25:49.343 "id": 1, 00:25:49.343 "state": "CLOSED", 00:25:49.343 "utilization": 1.0 00:25:49.343 }, 00:25:49.343 { 00:25:49.343 "id": 2, 00:25:49.343 "state": "CLOSED", 00:25:49.343 "utilization": 1.0 00:25:49.343 }, 00:25:49.343 { 00:25:49.343 "id": 3, 00:25:49.343 "state": "OPEN", 00:25:49.343 "utilization": 0.001953125 00:25:49.343 }, 00:25:49.343 { 00:25:49.343 "id": 4, 00:25:49.343 "state": "OPEN", 00:25:49.343 "utilization": 0.0 00:25:49.343 } 00:25:49.343 ], 00:25:49.343 "read-only": true 00:25:49.343 }, 00:25:49.343 { 00:25:49.343 "name": "verbose_mode", 00:25:49.343 "value": true, 00:25:49.343 "unit": "", 00:25:49.343 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:25:49.343 }, 00:25:49.343 { 00:25:49.343 "name": "prep_upgrade_on_shutdown", 00:25:49.343 "value": false, 00:25:49.343 "unit": "", 00:25:49.343 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:49.343 } 00:25:49.343 ] 00:25:49.343 } 00:25:49.343 18:00:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:25:49.602 [2024-11-05 18:00:09.507735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.602 [2024-11-05 18:00:09.507942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:49.602 [2024-11-05 18:00:09.507958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:25:49.602 [2024-11-05 18:00:09.507965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:25:49.602 [2024-11-05 18:00:09.507988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.602 [2024-11-05 18:00:09.507995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:49.602 [2024-11-05 18:00:09.508002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:49.602 [2024-11-05 18:00:09.508008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.602 [2024-11-05 18:00:09.508023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.602 [2024-11-05 18:00:09.508029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:49.602 [2024-11-05 18:00:09.508036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:49.602 [2024-11-05 18:00:09.508042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.602 [2024-11-05 18:00:09.508107] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.347 ms, result 0 00:25:49.602 true 00:25:49.602 18:00:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:25:49.602 18:00:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:25:49.602 18:00:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:49.859 18:00:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:25:49.859 18:00:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:25:49.859 18:00:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:50.116 [2024-11-05 18:00:09.887562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.116 [2024-11-05 18:00:09.887620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:50.116 [2024-11-05 18:00:09.887632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:25:50.116 [2024-11-05 18:00:09.887639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.116 [2024-11-05 18:00:09.887657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.116 [2024-11-05 18:00:09.887664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:50.116 [2024-11-05 18:00:09.887670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:50.116 [2024-11-05 18:00:09.887677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.116 [2024-11-05 18:00:09.887692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.116 [2024-11-05 18:00:09.887698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:50.116 [2024-11-05 18:00:09.887704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:50.116 [2024-11-05 18:00:09.887710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.116 [2024-11-05 18:00:09.887760] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.188 ms, result 0 00:25:50.116 true 00:25:50.116 18:00:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:50.116 { 00:25:50.116 "name": "ftl", 00:25:50.116 "properties": [ 00:25:50.116 { 00:25:50.116 "name": "superblock_version", 00:25:50.116 "value": 5, 00:25:50.116 "read-only": true 00:25:50.116 }, 00:25:50.116 { 00:25:50.116 "name": "base_device", 00:25:50.116 "bands": [ 00:25:50.116 { 00:25:50.116 "id": 0, 00:25:50.116 "state": "FREE", 00:25:50.116 "validity": 0.0 00:25:50.116 }, 00:25:50.116 { 00:25:50.116 "id": 1, 00:25:50.116 "state": "FREE", 00:25:50.116 "validity": 0.0 00:25:50.116 }, 00:25:50.116 { 00:25:50.116 "id": 2, 00:25:50.116 "state": "FREE", 00:25:50.116 "validity": 0.0 00:25:50.116 }, 00:25:50.116 { 00:25:50.116 "id": 3, 00:25:50.116 "state": "FREE", 00:25:50.116 "validity": 0.0 00:25:50.116 }, 00:25:50.116 { 00:25:50.116 "id": 4, 00:25:50.116 "state": "FREE", 00:25:50.116 "validity": 0.0 00:25:50.116 }, 00:25:50.116 { 00:25:50.116 "id": 5, 00:25:50.116 "state": "FREE", 00:25:50.116 "validity": 0.0 00:25:50.116 }, 00:25:50.116 { 00:25:50.116 "id": 6, 00:25:50.116 "state": "FREE", 00:25:50.116 "validity": 0.0 00:25:50.116 }, 00:25:50.116 { 00:25:50.116 "id": 7, 00:25:50.116 "state": "FREE", 00:25:50.116 "validity": 0.0 00:25:50.116 }, 00:25:50.116 { 00:25:50.116 "id": 8, 00:25:50.116 "state": "FREE", 00:25:50.116 "validity": 0.0 00:25:50.116 }, 00:25:50.116 { 00:25:50.116 "id": 9, 00:25:50.116 "state": "FREE", 00:25:50.116 "validity": 0.0 00:25:50.116 }, 00:25:50.116 { 00:25:50.116 "id": 10, 00:25:50.116 "state": "FREE", 00:25:50.116 "validity": 0.0 00:25:50.116 }, 00:25:50.116 { 00:25:50.116 "id": 11, 00:25:50.116 "state": "FREE", 00:25:50.116 "validity": 0.0 00:25:50.116 }, 00:25:50.116 { 00:25:50.116 "id": 12, 00:25:50.116 "state": "FREE", 00:25:50.116 "validity": 0.0 00:25:50.116 }, 00:25:50.116 { 00:25:50.116 "id": 13, 00:25:50.116 "state": "FREE", 00:25:50.116 "validity": 0.0 00:25:50.116 }, 00:25:50.116 { 00:25:50.116 "id": 14, 00:25:50.116 "state": "FREE", 00:25:50.116 "validity": 0.0 00:25:50.116 }, 00:25:50.116 { 00:25:50.116 "id": 15, 00:25:50.116 "state": "FREE", 00:25:50.116 "validity": 0.0 00:25:50.116 }, 00:25:50.116 { 00:25:50.116 "id": 16, 00:25:50.116 "state": "FREE", 00:25:50.116 "validity": 0.0 00:25:50.116 }, 00:25:50.116 { 00:25:50.116 "id": 17, 00:25:50.116 "state": "FREE", 00:25:50.116 "validity": 0.0 00:25:50.116 } 00:25:50.116 ], 00:25:50.116 "read-only": true 00:25:50.116 }, 00:25:50.116 { 00:25:50.116 "name": "cache_device", 00:25:50.116 "type": "bdev", 00:25:50.116 "chunks": [ 00:25:50.116 { 00:25:50.116 "id": 0, 00:25:50.116 "state": "INACTIVE", 00:25:50.116 "utilization": 0.0 00:25:50.116 }, 00:25:50.116 { 00:25:50.116 "id": 1, 00:25:50.116 "state": "CLOSED", 00:25:50.116 "utilization": 1.0 00:25:50.116 }, 00:25:50.116 { 00:25:50.116 "id": 2, 00:25:50.116 "state": "CLOSED", 00:25:50.116 "utilization": 1.0 00:25:50.116 }, 00:25:50.116 { 00:25:50.116 "id": 3, 00:25:50.116 "state": "OPEN", 00:25:50.116 "utilization": 0.001953125 00:25:50.116 }, 00:25:50.116 { 00:25:50.116 "id": 4, 00:25:50.116 "state": "OPEN", 00:25:50.116 "utilization": 0.0 00:25:50.116 } 00:25:50.116 ], 00:25:50.116 "read-only": true 00:25:50.116 }, 00:25:50.116 { 00:25:50.116 "name": "verbose_mode", 00:25:50.116 "value": true, 00:25:50.116 "unit": "", 00:25:50.116 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:25:50.116 }, 00:25:50.116 { 00:25:50.116 "name": "prep_upgrade_on_shutdown", 00:25:50.116 "value": true, 00:25:50.116 "unit": "", 00:25:50.116 
"desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:50.116 } 00:25:50.116 ] 00:25:50.116 } 00:25:50.116 18:00:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:25:50.116 18:00:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 91561 ]] 00:25:50.116 18:00:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 91561 00:25:50.116 18:00:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 91561 ']' 00:25:50.116 18:00:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 91561 00:25:50.116 18:00:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:25:50.116 18:00:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:50.116 18:00:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 91561 00:25:50.116 killing process with pid 91561 00:25:50.116 18:00:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:50.116 18:00:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:50.116 18:00:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 91561' 00:25:50.116 18:00:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 91561 00:25:50.116 18:00:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 91561 00:25:50.374 [2024-11-05 18:00:10.210960] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:25:50.374 [2024-11-05 18:00:10.215439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.374 [2024-11-05 18:00:10.215476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:25:50.374 [2024-11-05 18:00:10.215488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:50.374 [2024-11-05 18:00:10.215495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.374 [2024-11-05 18:00:10.215514] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:25:50.374 [2024-11-05 18:00:10.216028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.374 [2024-11-05 18:00:10.216052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:25:50.374 [2024-11-05 18:00:10.216060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.502 ms 00:25:50.374 [2024-11-05 18:00:10.216077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.351 [2024-11-05 18:00:18.833246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.351 [2024-11-05 18:00:18.833321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:00.351 [2024-11-05 18:00:18.833335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8617.116 ms 00:26:00.351 [2024-11-05 18:00:18.833343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.351 [2024-11-05 18:00:18.834351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.351 [2024-11-05 18:00:18.834372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:00.351 [2024-11-05 18:00:18.834382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.994 ms 00:26:00.351 [2024-11-05 18:00:18.834389] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.351 [2024-11-05 18:00:18.835294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.351 [2024-11-05 18:00:18.835316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:26:00.351 [2024-11-05 18:00:18.835324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.878 ms 00:26:00.351 [2024-11-05 18:00:18.835331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.351 [2024-11-05 18:00:18.837363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.351 [2024-11-05 18:00:18.837395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:00.351 [2024-11-05 18:00:18.837403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.994 ms 00:26:00.351 [2024-11-05 18:00:18.837410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.351 [2024-11-05 18:00:18.840389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.351 [2024-11-05 18:00:18.840558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:26:00.351 [2024-11-05 18:00:18.840573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.952 ms 00:26:00.351 [2024-11-05 18:00:18.840585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.351 [2024-11-05 18:00:18.840634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.351 [2024-11-05 18:00:18.840641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:00.351 [2024-11-05 18:00:18.840652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:26:00.351 [2024-11-05 18:00:18.840659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.351 [2024-11-05 18:00:18.842388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.351 [2024-11-05 18:00:18.842419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:26:00.351 [2024-11-05 18:00:18.842429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.715 ms 00:26:00.351 [2024-11-05 18:00:18.842444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.351 [2024-11-05 18:00:18.844515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.351 [2024-11-05 18:00:18.844545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:26:00.351 [2024-11-05 18:00:18.844553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.041 ms 00:26:00.351 [2024-11-05 18:00:18.844559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.351 [2024-11-05 18:00:18.845779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.352 [2024-11-05 18:00:18.845807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:00.352 [2024-11-05 18:00:18.845814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.193 ms 00:26:00.352 [2024-11-05 18:00:18.845819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.352 [2024-11-05 18:00:18.847010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.352 [2024-11-05 18:00:18.847040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:00.352 [2024-11-05 18:00:18.847047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.142 ms 
00:26:00.352 [2024-11-05 18:00:18.847053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.352 [2024-11-05 18:00:18.847087] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:00.352 [2024-11-05 18:00:18.847099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:00.352 [2024-11-05 18:00:18.847110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:00.352 [2024-11-05 18:00:18.847117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:00.352 [2024-11-05 18:00:18.847124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:00.352 [2024-11-05 18:00:18.847131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:00.352 [2024-11-05 18:00:18.847139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:00.352 [2024-11-05 18:00:18.847145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:00.352 [2024-11-05 18:00:18.847152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:00.352 [2024-11-05 18:00:18.847158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:00.352 [2024-11-05 18:00:18.847165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:00.352 [2024-11-05 18:00:18.847171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:00.352 [2024-11-05 18:00:18.847178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:00.352 [2024-11-05 18:00:18.847185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:00.352 [2024-11-05 18:00:18.847191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:00.352 [2024-11-05 18:00:18.847197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:00.352 [2024-11-05 18:00:18.847203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:00.352 [2024-11-05 18:00:18.847209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:00.352 [2024-11-05 18:00:18.847214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:00.352 [2024-11-05 18:00:18.847223] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:00.352 [2024-11-05 18:00:18.847229] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 5b175487-60f0-4f62-9149-393cee6672a5 00:26:00.352 [2024-11-05 18:00:18.847236] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:00.352 [2024-11-05 18:00:18.847246] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:26:00.352 [2024-11-05 18:00:18.847253] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:26:00.352 [2024-11-05 18:00:18.847259] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:26:00.352 [2024-11-05 
18:00:18.847270] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:00.352 [2024-11-05 18:00:18.847277] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:00.352 [2024-11-05 18:00:18.847283] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:00.352 [2024-11-05 18:00:18.847288] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:00.352 [2024-11-05 18:00:18.847294] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:00.352 [2024-11-05 18:00:18.847299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.352 [2024-11-05 18:00:18.847306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:00.352 [2024-11-05 18:00:18.847313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.214 ms 00:26:00.352 [2024-11-05 18:00:18.847319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.352 [2024-11-05 18:00:18.849091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.352 [2024-11-05 18:00:18.849112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:26:00.352 [2024-11-05 18:00:18.849121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.759 ms 00:26:00.352 [2024-11-05 18:00:18.849129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.352 [2024-11-05 18:00:18.849219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.352 [2024-11-05 18:00:18.849228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:00.352 [2024-11-05 18:00:18.849236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.076 ms 00:26:00.352 [2024-11-05 18:00:18.849243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.352 [2024-11-05 18:00:18.855365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.352 [2024-11-05 18:00:18.855532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:00.352 [2024-11-05 18:00:18.855546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:00.352 [2024-11-05 18:00:18.855553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.352 [2024-11-05 18:00:18.855579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.352 [2024-11-05 18:00:18.855586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:00.352 [2024-11-05 18:00:18.855594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:00.352 [2024-11-05 18:00:18.855600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.352 [2024-11-05 18:00:18.855668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.352 [2024-11-05 18:00:18.855677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:00.352 [2024-11-05 18:00:18.855684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:00.352 [2024-11-05 18:00:18.855691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.352 [2024-11-05 18:00:18.855704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.352 [2024-11-05 18:00:18.855718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:00.352 [2024-11-05 18:00:18.855725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 
00:26:00.352 [2024-11-05 18:00:18.855731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.352 [2024-11-05 18:00:18.867300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.352 [2024-11-05 18:00:18.867336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:00.352 [2024-11-05 18:00:18.867345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:00.352 [2024-11-05 18:00:18.867352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.352 [2024-11-05 18:00:18.875984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.352 [2024-11-05 18:00:18.876018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:00.352 [2024-11-05 18:00:18.876027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:00.352 [2024-11-05 18:00:18.876033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.352 [2024-11-05 18:00:18.876127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.352 [2024-11-05 18:00:18.876142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:00.352 [2024-11-05 18:00:18.876150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:00.352 [2024-11-05 18:00:18.876156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.352 [2024-11-05 18:00:18.876183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.352 [2024-11-05 18:00:18.876191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:00.352 [2024-11-05 18:00:18.876197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:00.352 [2024-11-05 18:00:18.876203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.352 [2024-11-05 18:00:18.876263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.352 [2024-11-05 18:00:18.876271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:00.352 [2024-11-05 18:00:18.876279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:00.352 [2024-11-05 18:00:18.876286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.352 [2024-11-05 18:00:18.876310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.352 [2024-11-05 18:00:18.876317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:00.352 [2024-11-05 18:00:18.876324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:00.352 [2024-11-05 18:00:18.876330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.352 [2024-11-05 18:00:18.876366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.352 [2024-11-05 18:00:18.876373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:00.352 [2024-11-05 18:00:18.876383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:00.352 [2024-11-05 18:00:18.876389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.352 [2024-11-05 18:00:18.876433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.352 [2024-11-05 18:00:18.876441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:00.352 [2024-11-05 18:00:18.876447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl] duration: 0.000 ms 00:26:00.352 [2024-11-05 18:00:18.876453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.352 [2024-11-05 18:00:18.876565] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8661.068 ms, result 0 00:26:04.543 18:00:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:04.543 18:00:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:26:04.543 18:00:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:04.543 18:00:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:04.543 18:00:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:04.543 18:00:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92042 00:26:04.543 18:00:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:04.543 18:00:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92042 00:26:04.543 18:00:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:04.543 18:00:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 92042 ']' 00:26:04.543 18:00:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:04.543 18:00:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:04.543 18:00:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:04.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:04.543 18:00:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:04.543 18:00:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:04.543 [2024-11-05 18:00:23.844374] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:26:04.543 [2024-11-05 18:00:23.844494] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92042 ] 00:26:04.543 [2024-11-05 18:00:23.975139] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
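The restart sequence above follows a simple pattern: kill the old target, launch a fresh spdk_tgt pinned to core 0 with the saved tgt.json, and block until its RPC socket answers before issuing RPCs. A minimal sketch of that pattern, assuming the waitforlisten helper from this test's common.sh and the repo paths shown in the log:

  SPDK=/home/vagrant/spdk_repo/spdk
  CFG=$SPDK/test/ftl/config/tgt.json

  # Start a fresh target on core 0, restoring bdevs from the saved config.
  $SPDK/build/bin/spdk_tgt --cpumask='[0]' --config="$CFG" &
  spdk_tgt_pid=$!

  # Poll until the target listens on /var/tmp/spdk.sock, then talk to it.
  waitforlisten "$spdk_tgt_pid"
  $SPDK/scripts/rpc.py bdev_ftl_get_properties -b ftl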
00:26:04.543 [2024-11-05 18:00:23.998668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.543 [2024-11-05 18:00:24.022451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.543 [2024-11-05 18:00:24.320076] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:04.543 [2024-11-05 18:00:24.320134] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:04.543 [2024-11-05 18:00:24.466626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.543 [2024-11-05 18:00:24.466675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:04.543 [2024-11-05 18:00:24.466691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:04.543 [2024-11-05 18:00:24.466701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.543 [2024-11-05 18:00:24.466749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.543 [2024-11-05 18:00:24.466760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:04.543 [2024-11-05 18:00:24.466769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:26:04.543 [2024-11-05 18:00:24.466778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.543 [2024-11-05 18:00:24.466806] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:04.543 [2024-11-05 18:00:24.467263] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:04.543 [2024-11-05 18:00:24.467288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.543 [2024-11-05 18:00:24.467296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:04.543 [2024-11-05 18:00:24.467304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.495 ms 00:26:04.543 [2024-11-05 18:00:24.467315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.543 [2024-11-05 18:00:24.468650] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:04.543 [2024-11-05 18:00:24.471529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.543 [2024-11-05 18:00:24.471562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:04.543 [2024-11-05 18:00:24.471571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.880 ms 00:26:04.544 [2024-11-05 18:00:24.471578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.544 [2024-11-05 18:00:24.471636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.544 [2024-11-05 18:00:24.471645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:04.544 [2024-11-05 18:00:24.471652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:26:04.544 [2024-11-05 18:00:24.471658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.544 [2024-11-05 18:00:24.478113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.544 [2024-11-05 18:00:24.478138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:04.544 [2024-11-05 18:00:24.478147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.412 ms 00:26:04.544 [2024-11-05 18:00:24.478153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:26:04.544 [2024-11-05 18:00:24.478192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.544 [2024-11-05 18:00:24.478199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:04.544 [2024-11-05 18:00:24.478209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:26:04.544 [2024-11-05 18:00:24.478215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.544 [2024-11-05 18:00:24.478256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.544 [2024-11-05 18:00:24.478266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:04.544 [2024-11-05 18:00:24.478273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:26:04.544 [2024-11-05 18:00:24.478282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.544 [2024-11-05 18:00:24.478302] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:04.544 [2024-11-05 18:00:24.479871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.544 [2024-11-05 18:00:24.479893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:04.544 [2024-11-05 18:00:24.479902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.576 ms 00:26:04.544 [2024-11-05 18:00:24.479910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.544 [2024-11-05 18:00:24.479941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.544 [2024-11-05 18:00:24.479950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:04.544 [2024-11-05 18:00:24.479958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:04.544 [2024-11-05 18:00:24.479966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.544 [2024-11-05 18:00:24.479986] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:04.544 [2024-11-05 18:00:24.480006] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:26:04.544 [2024-11-05 18:00:24.480036] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:04.544 [2024-11-05 18:00:24.480051] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:26:04.544 [2024-11-05 18:00:24.480159] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:04.544 [2024-11-05 18:00:24.480170] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:04.544 [2024-11-05 18:00:24.480180] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:04.544 [2024-11-05 18:00:24.480189] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:04.544 [2024-11-05 18:00:24.480197] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:04.544 [2024-11-05 18:00:24.480204] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:04.544 [2024-11-05 18:00:24.480210] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:04.544 [2024-11-05 18:00:24.480216] ftl_layout.c: 
691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:04.544 [2024-11-05 18:00:24.480223] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:04.544 [2024-11-05 18:00:24.480229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.544 [2024-11-05 18:00:24.480237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:04.544 [2024-11-05 18:00:24.480249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.245 ms 00:26:04.544 [2024-11-05 18:00:24.480255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.544 [2024-11-05 18:00:24.480333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.544 [2024-11-05 18:00:24.480344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:04.544 [2024-11-05 18:00:24.480352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:26:04.544 [2024-11-05 18:00:24.480358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.544 [2024-11-05 18:00:24.480437] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:04.544 [2024-11-05 18:00:24.480445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:04.544 [2024-11-05 18:00:24.480455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:04.544 [2024-11-05 18:00:24.480462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:04.544 [2024-11-05 18:00:24.480468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:04.544 [2024-11-05 18:00:24.480474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:04.544 [2024-11-05 18:00:24.480480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:04.544 [2024-11-05 18:00:24.480485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:04.544 [2024-11-05 18:00:24.480492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:04.544 [2024-11-05 18:00:24.480498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:04.544 [2024-11-05 18:00:24.480504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:04.544 [2024-11-05 18:00:24.480509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:04.544 [2024-11-05 18:00:24.480515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:04.544 [2024-11-05 18:00:24.480520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:04.544 [2024-11-05 18:00:24.480530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:04.544 [2024-11-05 18:00:24.480536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:04.544 [2024-11-05 18:00:24.480541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:04.544 [2024-11-05 18:00:24.480546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:04.544 [2024-11-05 18:00:24.480558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:04.544 [2024-11-05 18:00:24.480564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:04.544 [2024-11-05 18:00:24.480569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:04.544 [2024-11-05 18:00:24.480574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:04.544 [2024-11-05 
18:00:24.480579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:04.544 [2024-11-05 18:00:24.480584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:04.544 [2024-11-05 18:00:24.480589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:04.544 [2024-11-05 18:00:24.480595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:04.544 [2024-11-05 18:00:24.480599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:04.544 [2024-11-05 18:00:24.480604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:04.544 [2024-11-05 18:00:24.480609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:04.544 [2024-11-05 18:00:24.480620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:04.544 [2024-11-05 18:00:24.480627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:04.544 [2024-11-05 18:00:24.480634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:04.544 [2024-11-05 18:00:24.480640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:04.544 [2024-11-05 18:00:24.480645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:04.544 [2024-11-05 18:00:24.480651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:04.544 [2024-11-05 18:00:24.480656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:04.544 [2024-11-05 18:00:24.480661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:04.544 [2024-11-05 18:00:24.480666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:04.544 [2024-11-05 18:00:24.480671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:04.544 [2024-11-05 18:00:24.480677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:04.544 [2024-11-05 18:00:24.480682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:04.544 [2024-11-05 18:00:24.480687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:04.544 [2024-11-05 18:00:24.480692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:04.544 [2024-11-05 18:00:24.480698] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:04.544 [2024-11-05 18:00:24.480705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:04.544 [2024-11-05 18:00:24.480711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:04.544 [2024-11-05 18:00:24.480719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:04.544 [2024-11-05 18:00:24.480725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:04.544 [2024-11-05 18:00:24.480731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:04.544 [2024-11-05 18:00:24.480736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:04.544 [2024-11-05 18:00:24.480741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:04.544 [2024-11-05 18:00:24.480745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:04.544 [2024-11-05 18:00:24.480751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:04.544 [2024-11-05 18:00:24.480757] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB 
metadata layout - nvc: 00:26:04.544 [2024-11-05 18:00:24.480765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:04.544 [2024-11-05 18:00:24.480772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:04.544 [2024-11-05 18:00:24.480777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:04.544 [2024-11-05 18:00:24.480782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:04.544 [2024-11-05 18:00:24.480788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:04.545 [2024-11-05 18:00:24.480794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:04.545 [2024-11-05 18:00:24.480800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:04.545 [2024-11-05 18:00:24.480807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:04.545 [2024-11-05 18:00:24.480818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:04.545 [2024-11-05 18:00:24.480824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:04.545 [2024-11-05 18:00:24.480829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:04.545 [2024-11-05 18:00:24.480836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:04.545 [2024-11-05 18:00:24.480841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:04.545 [2024-11-05 18:00:24.480847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:04.545 [2024-11-05 18:00:24.480852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:04.545 [2024-11-05 18:00:24.480858] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:04.545 [2024-11-05 18:00:24.480865] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:04.545 [2024-11-05 18:00:24.480871] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:04.545 [2024-11-05 18:00:24.480878] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:04.545 [2024-11-05 18:00:24.480883] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:04.545 [2024-11-05 18:00:24.480888] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:04.545 [2024-11-05 18:00:24.480895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.545 [2024-11-05 18:00:24.480903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:04.545 [2024-11-05 18:00:24.480909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.511 ms 00:26:04.545 [2024-11-05 18:00:24.480917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.545 [2024-11-05 18:00:24.480953] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:26:04.545 [2024-11-05 18:00:24.480961] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:26:09.831 [2024-11-05 18:00:29.009682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.831 [2024-11-05 18:00:29.009738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:09.831 [2024-11-05 18:00:29.009753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4528.713 ms 00:26:09.831 [2024-11-05 18:00:29.009775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.831 [2024-11-05 18:00:29.017720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.831 [2024-11-05 18:00:29.017756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:09.831 [2024-11-05 18:00:29.017768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.868 ms 00:26:09.831 [2024-11-05 18:00:29.017776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.831 [2024-11-05 18:00:29.017824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.831 [2024-11-05 18:00:29.017833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:09.831 [2024-11-05 18:00:29.017841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:26:09.831 [2024-11-05 18:00:29.017853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.831 [2024-11-05 18:00:29.026261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.831 [2024-11-05 18:00:29.026291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:09.831 [2024-11-05 18:00:29.026301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.359 ms 00:26:09.831 [2024-11-05 18:00:29.026308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.831 [2024-11-05 18:00:29.026340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.831 [2024-11-05 18:00:29.026351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:09.831 [2024-11-05 18:00:29.026361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:09.831 [2024-11-05 18:00:29.026368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.831 [2024-11-05 18:00:29.026693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.831 [2024-11-05 18:00:29.026708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:09.831 [2024-11-05 18:00:29.026718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.292 ms 00:26:09.831 [2024-11-05 18:00:29.026726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
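One way to sanity-check the superblock layout dump above: assuming 4 KiB FTL blocks (implied by the 5120.00 MiB NV cache and 20480.00 MiB base capacities reported earlier), the trailing 0xfffffffe free-space region of each device should end exactly at the device boundary. A quick check in shell arithmetic, using the blk_offs/blk_sz values verbatim from the dump:

  # nvc: last region starts at 0x2fa0 and spans 0x13d060 blocks
  printf '0x%x\n' $(( 0x2fa0 + 0x13d060 ))    # 0x140000 blocks * 4 KiB = 5120 MiB
  # base dev: last region starts at 0x480120 and spans 0x7fee0 blocks
  printf '0x%x\n' $(( 0x480120 + 0x7fee0 ))   # 0x500000 blocks * 4 KiB = 20480 MiB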
00:26:09.831 [2024-11-05 18:00:29.026772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.831 [2024-11-05 18:00:29.026782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:09.831 [2024-11-05 18:00:29.026791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:26:09.831 [2024-11-05 18:00:29.026851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.831 [2024-11-05 18:00:29.032091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.831 [2024-11-05 18:00:29.032114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:09.831 [2024-11-05 18:00:29.032122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.188 ms 00:26:09.831 [2024-11-05 18:00:29.032130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.831 [2024-11-05 18:00:29.034567] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:09.831 [2024-11-05 18:00:29.034597] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:09.831 [2024-11-05 18:00:29.034615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.831 [2024-11-05 18:00:29.034623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:26:09.831 [2024-11-05 18:00:29.034631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.389 ms 00:26:09.831 [2024-11-05 18:00:29.034638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.831 [2024-11-05 18:00:29.038544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.831 [2024-11-05 18:00:29.038581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:26:09.831 [2024-11-05 18:00:29.038591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.872 ms 00:26:09.831 [2024-11-05 18:00:29.038600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.831 [2024-11-05 18:00:29.040421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.831 [2024-11-05 18:00:29.040448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:26:09.831 [2024-11-05 18:00:29.040457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.782 ms 00:26:09.831 [2024-11-05 18:00:29.040464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.831 [2024-11-05 18:00:29.042144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.831 [2024-11-05 18:00:29.042167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:26:09.831 [2024-11-05 18:00:29.042176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.648 ms 00:26:09.831 [2024-11-05 18:00:29.042183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.831 [2024-11-05 18:00:29.042517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.831 [2024-11-05 18:00:29.042534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:09.831 [2024-11-05 18:00:29.042543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.261 ms 00:26:09.831 [2024-11-05 18:00:29.042550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.831 [2024-11-05 18:00:29.067887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:26:09.831 [2024-11-05 18:00:29.067933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:09.832 [2024-11-05 18:00:29.067947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.316 ms 00:26:09.832 [2024-11-05 18:00:29.067955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.832 [2024-11-05 18:00:29.075403] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:09.832 [2024-11-05 18:00:29.076090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.832 [2024-11-05 18:00:29.076115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:09.832 [2024-11-05 18:00:29.076124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.091 ms 00:26:09.832 [2024-11-05 18:00:29.076135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.832 [2024-11-05 18:00:29.076206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.832 [2024-11-05 18:00:29.076216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:26:09.832 [2024-11-05 18:00:29.076225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:09.832 [2024-11-05 18:00:29.076233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.832 [2024-11-05 18:00:29.076284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.832 [2024-11-05 18:00:29.076296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:09.832 [2024-11-05 18:00:29.076308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:26:09.832 [2024-11-05 18:00:29.076315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.832 [2024-11-05 18:00:29.076341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.832 [2024-11-05 18:00:29.076349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:09.832 [2024-11-05 18:00:29.076356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:09.832 [2024-11-05 18:00:29.076367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.832 [2024-11-05 18:00:29.076399] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:09.832 [2024-11-05 18:00:29.076409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.832 [2024-11-05 18:00:29.076416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:09.832 [2024-11-05 18:00:29.076424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:26:09.832 [2024-11-05 18:00:29.076439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.832 [2024-11-05 18:00:29.079929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.832 [2024-11-05 18:00:29.079959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:09.832 [2024-11-05 18:00:29.079969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.473 ms 00:26:09.832 [2024-11-05 18:00:29.079977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.832 [2024-11-05 18:00:29.080046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.832 [2024-11-05 18:00:29.080055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:09.832 
[2024-11-05 18:00:29.080075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:26:09.832 [2024-11-05 18:00:29.080086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.832 [2024-11-05 18:00:29.080963] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4613.954 ms, result 0 00:26:09.832 [2024-11-05 18:00:29.096533] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:09.832 [2024-11-05 18:00:29.112545] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:09.832 [2024-11-05 18:00:29.120657] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:09.832 18:00:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:09.832 18:00:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:26:09.832 18:00:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:09.832 18:00:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:26:09.832 18:00:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:10.090 [2024-11-05 18:00:30.001439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.090 [2024-11-05 18:00:30.001486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:10.090 [2024-11-05 18:00:30.001497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:10.090 [2024-11-05 18:00:30.001506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.090 [2024-11-05 18:00:30.001524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.090 [2024-11-05 18:00:30.001533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:10.091 [2024-11-05 18:00:30.001541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:10.091 [2024-11-05 18:00:30.001548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.091 [2024-11-05 18:00:30.001562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.091 [2024-11-05 18:00:30.001569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:10.091 [2024-11-05 18:00:30.001575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:10.091 [2024-11-05 18:00:30.001580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.091 [2024-11-05 18:00:30.001630] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.178 ms, result 0 00:26:10.091 true 00:26:10.091 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:10.348 { 00:26:10.348 "name": "ftl", 00:26:10.348 "properties": [ 00:26:10.348 { 00:26:10.348 "name": "superblock_version", 00:26:10.348 "value": 5, 00:26:10.348 "read-only": true 00:26:10.348 }, 00:26:10.348 { 00:26:10.348 "name": "base_device", 00:26:10.348 "bands": [ 00:26:10.348 { 00:26:10.348 "id": 0, 00:26:10.348 "state": "CLOSED", 00:26:10.348 "validity": 1.0 00:26:10.348 }, 00:26:10.348 { 00:26:10.348 "id": 1, 00:26:10.348 "state": "CLOSED", 00:26:10.348 "validity": 1.0 
00:26:10.348 }, 00:26:10.348 { 00:26:10.348 "id": 2, 00:26:10.348 "state": "CLOSED", 00:26:10.348 "validity": 0.007843137254901933 00:26:10.348 }, 00:26:10.348 { 00:26:10.348 "id": 3, 00:26:10.348 "state": "FREE", 00:26:10.348 "validity": 0.0 00:26:10.348 }, 00:26:10.348 { 00:26:10.348 "id": 4, 00:26:10.348 "state": "FREE", 00:26:10.348 "validity": 0.0 00:26:10.348 }, 00:26:10.348 { 00:26:10.348 "id": 5, 00:26:10.348 "state": "FREE", 00:26:10.348 "validity": 0.0 00:26:10.348 }, 00:26:10.348 { 00:26:10.348 "id": 6, 00:26:10.348 "state": "FREE", 00:26:10.348 "validity": 0.0 00:26:10.348 }, 00:26:10.348 { 00:26:10.348 "id": 7, 00:26:10.349 "state": "FREE", 00:26:10.349 "validity": 0.0 00:26:10.349 }, 00:26:10.349 { 00:26:10.349 "id": 8, 00:26:10.349 "state": "FREE", 00:26:10.349 "validity": 0.0 00:26:10.349 }, 00:26:10.349 { 00:26:10.349 "id": 9, 00:26:10.349 "state": "FREE", 00:26:10.349 "validity": 0.0 00:26:10.349 }, 00:26:10.349 { 00:26:10.349 "id": 10, 00:26:10.349 "state": "FREE", 00:26:10.349 "validity": 0.0 00:26:10.349 }, 00:26:10.349 { 00:26:10.349 "id": 11, 00:26:10.349 "state": "FREE", 00:26:10.349 "validity": 0.0 00:26:10.349 }, 00:26:10.349 { 00:26:10.349 "id": 12, 00:26:10.349 "state": "FREE", 00:26:10.349 "validity": 0.0 00:26:10.349 }, 00:26:10.349 { 00:26:10.349 "id": 13, 00:26:10.349 "state": "FREE", 00:26:10.349 "validity": 0.0 00:26:10.349 }, 00:26:10.349 { 00:26:10.349 "id": 14, 00:26:10.349 "state": "FREE", 00:26:10.349 "validity": 0.0 00:26:10.349 }, 00:26:10.349 { 00:26:10.349 "id": 15, 00:26:10.349 "state": "FREE", 00:26:10.349 "validity": 0.0 00:26:10.349 }, 00:26:10.349 { 00:26:10.349 "id": 16, 00:26:10.349 "state": "FREE", 00:26:10.349 "validity": 0.0 00:26:10.349 }, 00:26:10.349 { 00:26:10.349 "id": 17, 00:26:10.349 "state": "FREE", 00:26:10.349 "validity": 0.0 00:26:10.349 } 00:26:10.349 ], 00:26:10.349 "read-only": true 00:26:10.349 }, 00:26:10.349 { 00:26:10.349 "name": "cache_device", 00:26:10.349 "type": "bdev", 00:26:10.349 "chunks": [ 00:26:10.349 { 00:26:10.349 "id": 0, 00:26:10.349 "state": "INACTIVE", 00:26:10.349 "utilization": 0.0 00:26:10.349 }, 00:26:10.349 { 00:26:10.349 "id": 1, 00:26:10.349 "state": "OPEN", 00:26:10.349 "utilization": 0.0 00:26:10.349 }, 00:26:10.349 { 00:26:10.349 "id": 2, 00:26:10.349 "state": "OPEN", 00:26:10.349 "utilization": 0.0 00:26:10.349 }, 00:26:10.349 { 00:26:10.349 "id": 3, 00:26:10.349 "state": "FREE", 00:26:10.349 "utilization": 0.0 00:26:10.349 }, 00:26:10.349 { 00:26:10.349 "id": 4, 00:26:10.349 "state": "FREE", 00:26:10.349 "utilization": 0.0 00:26:10.349 } 00:26:10.349 ], 00:26:10.349 "read-only": true 00:26:10.349 }, 00:26:10.349 { 00:26:10.349 "name": "verbose_mode", 00:26:10.349 "value": true, 00:26:10.349 "unit": "", 00:26:10.349 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:10.349 }, 00:26:10.349 { 00:26:10.349 "name": "prep_upgrade_on_shutdown", 00:26:10.349 "value": false, 00:26:10.349 "unit": "", 00:26:10.349 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:10.349 } 00:26:10.349 ] 00:26:10.349 } 00:26:10.349 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:26:10.349 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:10.349 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:10.608 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:26:10.608 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:26:10.608 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:26:10.608 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:10.608 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:26:10.866 Validate MD5 checksum, iteration 1 00:26:10.866 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:26:10.866 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:26:10.866 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:26:10.866 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:10.866 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:10.866 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:10.866 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:10.866 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:10.866 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:10.866 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:10.866 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:10.866 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:10.866 18:00:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:10.866 [2024-11-05 18:00:30.704844] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:26:10.866 [2024-11-05 18:00:30.704967] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92129 ] 00:26:10.866 [2024-11-05 18:00:30.832365] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
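The two jq filters traced just above are how the test decides the FTL device is quiescent before checksumming: the first counts cache chunks whose utilization is non-zero, the second counts bands reported as OPENED, and both counts must come back 0. A minimal standalone sketch of that check, assuming only that rpc.py and jq are available and that the bdev is named "ftl" as in this run (the filters are copied from the trace):

    # Sketch of the idle check traced above; not the verbatim upgrade_shutdown.sh.
    props=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl)
    # Cache chunks that still hold user data ("utilization" is a 0.0-1.0 fraction).
    used=$(jq '[.properties[] | select(.name == "cache_device")
                | .chunks[] | select(.utilization != 0.0)] | length' <<< "$props")
    # Bands still reported in the OPENED state.
    opened=$(jq '[.properties[] | select(.name == "bands")
                  | .bands[] | select(.state == "OPENED")] | length' <<< "$props")
    # Checksum validation starts only from an idle device: both counts must be 0.
    (( used == 0 && opened == 0 )) || { echo 'ftl bdev not quiescent' >&2; exit 1; }

With used=0 and opened=0 the test enters test_validate_checksum; the spdk_dd instance starting below (pid 92129) performs its first 1 GiB read from ftln1.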
00:26:11.125 [2024-11-05 18:00:30.860353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.125 [2024-11-05 18:00:30.878335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:12.512 [2024-11-05T18:00:33.078Z] Copying: 650/1024 [MB] (650 MBps) [2024-11-05T18:00:33.524Z] Copying: 1024/1024 [MB] (average 638 MBps) 00:26:13.533 00:26:13.533 18:00:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:13.533 18:00:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:16.059
00:26:16.059 [2024-11-05 18:00:35.880777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.059 [2024-11-05 18:00:35.898669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.433  [2024-11-05T18:00:37.996Z] Copying: 642/1024 [MB] (642 MBps) [2024-11-05T18:00:41.277Z] Copying: 1024/1024 [MB] (average 644 MBps) 00:26:21.286 00:26:21.287 18:00:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:26:21.287 18:00:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:23.187 18:00:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:23.187 18:00:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b05d1bc4b5b0345f8d5efe834e617983 00:26:23.187 18:00:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b05d1bc4b5b0345f8d5efe834e617983 != \b\0\5\d\1\b\c\4\b\5\b\0\3\4\5\f\8\d\5\e\f\e\8\3\4\e\6\1\7\9\8\3 ]] 00:26:23.187 18:00:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:23.187 18:00:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:23.187 18:00:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:26:23.187 18:00:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 92042 ]] 00:26:23.187 18:00:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 92042 00:26:23.187 18:00:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:26:23.187 18:00:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:26:23.187 18:00:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:23.187 18:00:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:23.187 18:00:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:23.187 18:00:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:23.187 18:00:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92265 00:26:23.188 18:00:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:23.188 18:00:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92265 00:26:23.188 18:00:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 92265 ']' 00:26:23.188 18:00:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:23.188 18:00:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:23.188 18:00:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:23.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:23.188 18:00:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:23.188 18:00:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:23.188 [2024-11-05 18:00:42.963227] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 
00:26:23.188 [2024-11-05 18:00:42.963840] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92265 ] 00:26:23.188 [2024-11-05 18:00:43.092542] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:23.188 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: 92042 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:26:23.188 [2024-11-05 18:00:43.117880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.188 [2024-11-05 18:00:43.140131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.446 [2024-11-05 18:00:43.430115] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:23.446 [2024-11-05 18:00:43.430181] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:23.706 [2024-11-05 18:00:43.568431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.706 [2024-11-05 18:00:43.568474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:23.706 [2024-11-05 18:00:43.568488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:23.706 [2024-11-05 18:00:43.568495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.706 [2024-11-05 18:00:43.568535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.706 [2024-11-05 18:00:43.568543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:23.706 [2024-11-05 18:00:43.568552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:26:23.706 [2024-11-05 18:00:43.568558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.706 [2024-11-05 18:00:43.568579] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:23.706 [2024-11-05 18:00:43.568817] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:23.706 [2024-11-05 18:00:43.568840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.706 [2024-11-05 18:00:43.568847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:23.706 [2024-11-05 18:00:43.568854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.268 ms 00:26:23.706 [2024-11-05 18:00:43.568861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.706 [2024-11-05 18:00:43.569082] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:23.706 [2024-11-05 18:00:43.573051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.706 [2024-11-05 18:00:43.573097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:23.706 [2024-11-05 18:00:43.573107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.969 ms 00:26:23.706 [2024-11-05 18:00:43.573113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.706 [2024-11-05 18:00:43.574083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.706 [2024-11-05 18:00:43.574107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super 
block 00:26:23.706 [2024-11-05 18:00:43.574119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:26:23.706 [2024-11-05 18:00:43.574127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.706 [2024-11-05 18:00:43.574359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.706 [2024-11-05 18:00:43.574370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:23.706 [2024-11-05 18:00:43.574377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.177 ms 00:26:23.706 [2024-11-05 18:00:43.574383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.706 [2024-11-05 18:00:43.574414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.707 [2024-11-05 18:00:43.574421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:23.707 [2024-11-05 18:00:43.574427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:26:23.707 [2024-11-05 18:00:43.574433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.707 [2024-11-05 18:00:43.574454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.707 [2024-11-05 18:00:43.574463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:23.707 [2024-11-05 18:00:43.574471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:23.707 [2024-11-05 18:00:43.574480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.707 [2024-11-05 18:00:43.574501] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:23.707 [2024-11-05 18:00:43.575351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.707 [2024-11-05 18:00:43.575365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:23.707 [2024-11-05 18:00:43.575377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.855 ms 00:26:23.707 [2024-11-05 18:00:43.575383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.707 [2024-11-05 18:00:43.575404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.707 [2024-11-05 18:00:43.575413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:23.707 [2024-11-05 18:00:43.575420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:23.707 [2024-11-05 18:00:43.575427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.707 [2024-11-05 18:00:43.575445] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:23.707 [2024-11-05 18:00:43.575461] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:26:23.707 [2024-11-05 18:00:43.575487] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:23.707 [2024-11-05 18:00:43.575503] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:26:23.707 [2024-11-05 18:00:43.575586] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:23.707 [2024-11-05 18:00:43.575596] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:23.707 [2024-11-05 18:00:43.575605] 
upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:23.707 [2024-11-05 18:00:43.575613] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:23.707 [2024-11-05 18:00:43.575620] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:23.707 [2024-11-05 18:00:43.575627] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:23.707 [2024-11-05 18:00:43.575634] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:23.707 [2024-11-05 18:00:43.575639] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:23.707 [2024-11-05 18:00:43.575646] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:23.707 [2024-11-05 18:00:43.575652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.707 [2024-11-05 18:00:43.575659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:23.707 [2024-11-05 18:00:43.575667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.209 ms 00:26:23.707 [2024-11-05 18:00:43.575672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.707 [2024-11-05 18:00:43.575737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.707 [2024-11-05 18:00:43.575744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:23.707 [2024-11-05 18:00:43.575757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:26:23.707 [2024-11-05 18:00:43.575763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.707 [2024-11-05 18:00:43.575846] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:23.707 [2024-11-05 18:00:43.575855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:23.707 [2024-11-05 18:00:43.575862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:23.707 [2024-11-05 18:00:43.575870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:23.707 [2024-11-05 18:00:43.575876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:23.707 [2024-11-05 18:00:43.575881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:23.707 [2024-11-05 18:00:43.575886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:23.707 [2024-11-05 18:00:43.575893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:23.707 [2024-11-05 18:00:43.575899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:23.707 [2024-11-05 18:00:43.575904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:23.707 [2024-11-05 18:00:43.575909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:23.707 [2024-11-05 18:00:43.575915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:23.707 [2024-11-05 18:00:43.575925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:23.707 [2024-11-05 18:00:43.575934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:23.707 [2024-11-05 18:00:43.575939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:23.707 [2024-11-05 18:00:43.575944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:23.707 [2024-11-05 
18:00:43.575955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:23.707 [2024-11-05 18:00:43.575961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:23.707 [2024-11-05 18:00:43.575966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:23.707 [2024-11-05 18:00:43.575972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:23.707 [2024-11-05 18:00:43.575977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:23.707 [2024-11-05 18:00:43.575982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:23.707 [2024-11-05 18:00:43.575987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:23.707 [2024-11-05 18:00:43.575993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:23.707 [2024-11-05 18:00:43.575998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:23.707 [2024-11-05 18:00:43.576004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:23.707 [2024-11-05 18:00:43.576010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:23.707 [2024-11-05 18:00:43.576016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:23.707 [2024-11-05 18:00:43.576023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:23.707 [2024-11-05 18:00:43.576030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:23.707 [2024-11-05 18:00:43.576037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:23.707 [2024-11-05 18:00:43.576043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:23.707 [2024-11-05 18:00:43.576049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:23.707 [2024-11-05 18:00:43.576054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:23.707 [2024-11-05 18:00:43.576060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:23.707 [2024-11-05 18:00:43.576077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:23.707 [2024-11-05 18:00:43.576083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:23.707 [2024-11-05 18:00:43.576089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:23.707 [2024-11-05 18:00:43.576095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:23.707 [2024-11-05 18:00:43.576101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:23.707 [2024-11-05 18:00:43.576107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:23.707 [2024-11-05 18:00:43.576121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:23.707 [2024-11-05 18:00:43.576127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:23.707 [2024-11-05 18:00:43.576133] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:23.707 [2024-11-05 18:00:43.576142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:23.707 [2024-11-05 18:00:43.576153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:23.707 [2024-11-05 18:00:43.576160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:23.707 [2024-11-05 18:00:43.576168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:23.707 [2024-11-05 
18:00:43.576174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:23.707 [2024-11-05 18:00:43.576180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:23.707 [2024-11-05 18:00:43.576186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:23.707 [2024-11-05 18:00:43.576192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:23.707 [2024-11-05 18:00:43.576197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:23.707 [2024-11-05 18:00:43.576204] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:23.707 [2024-11-05 18:00:43.576210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:23.707 [2024-11-05 18:00:43.576217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:23.707 [2024-11-05 18:00:43.576222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:23.707 [2024-11-05 18:00:43.576228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:23.707 [2024-11-05 18:00:43.576233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:23.707 [2024-11-05 18:00:43.576239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:23.707 [2024-11-05 18:00:43.576244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:23.707 [2024-11-05 18:00:43.576251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:23.707 [2024-11-05 18:00:43.576257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:23.707 [2024-11-05 18:00:43.576262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:23.708 [2024-11-05 18:00:43.576268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:23.708 [2024-11-05 18:00:43.576273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:23.708 [2024-11-05 18:00:43.576278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:23.708 [2024-11-05 18:00:43.576283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:23.708 [2024-11-05 18:00:43.576289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:23.708 [2024-11-05 18:00:43.576294] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:23.708 [2024-11-05 18:00:43.576300] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:23.708 [2024-11-05 18:00:43.576308] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:23.708 [2024-11-05 18:00:43.576313] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:23.708 [2024-11-05 18:00:43.576319] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:23.708 [2024-11-05 18:00:43.576324] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:23.708 [2024-11-05 18:00:43.576329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.708 [2024-11-05 18:00:43.576340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:23.708 [2024-11-05 18:00:43.576348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.537 ms 00:26:23.708 [2024-11-05 18:00:43.576353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.708 [2024-11-05 18:00:43.584643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.708 [2024-11-05 18:00:43.584665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:23.708 [2024-11-05 18:00:43.584673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.180 ms 00:26:23.708 [2024-11-05 18:00:43.584682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.708 [2024-11-05 18:00:43.584709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.708 [2024-11-05 18:00:43.584716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:23.708 [2024-11-05 18:00:43.584723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:23.708 [2024-11-05 18:00:43.584731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.708 [2024-11-05 18:00:43.594355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.708 [2024-11-05 18:00:43.594380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:23.708 [2024-11-05 18:00:43.594392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.588 ms 00:26:23.708 [2024-11-05 18:00:43.594398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.708 [2024-11-05 18:00:43.594421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.708 [2024-11-05 18:00:43.594428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:23.708 [2024-11-05 18:00:43.594437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:23.708 [2024-11-05 18:00:43.594445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.708 [2024-11-05 18:00:43.594510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.708 [2024-11-05 18:00:43.594519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:23.708 [2024-11-05 18:00:43.594529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:26:23.708 [2024-11-05 18:00:43.594538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.708 [2024-11-05 18:00:43.594571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.708 
[2024-11-05 18:00:43.594578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:23.708 [2024-11-05 18:00:43.594584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:26:23.708 [2024-11-05 18:00:43.594593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.708 [2024-11-05 18:00:43.600899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.708 [2024-11-05 18:00:43.600922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:23.708 [2024-11-05 18:00:43.600931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.285 ms 00:26:23.708 [2024-11-05 18:00:43.600937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.708 [2024-11-05 18:00:43.601012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.708 [2024-11-05 18:00:43.601022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:26:23.708 [2024-11-05 18:00:43.601032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:23.708 [2024-11-05 18:00:43.601038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.708 [2024-11-05 18:00:43.619992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.708 [2024-11-05 18:00:43.620095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:26:23.708 [2024-11-05 18:00:43.620123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.931 ms 00:26:23.708 [2024-11-05 18:00:43.620142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.708 [2024-11-05 18:00:43.622480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.708 [2024-11-05 18:00:43.622543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:23.708 [2024-11-05 18:00:43.622568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.600 ms 00:26:23.708 [2024-11-05 18:00:43.622586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.708 [2024-11-05 18:00:43.642512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.708 [2024-11-05 18:00:43.642558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:23.708 [2024-11-05 18:00:43.642570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.857 ms 00:26:23.708 [2024-11-05 18:00:43.642577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.708 [2024-11-05 18:00:43.642690] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:26:23.708 [2024-11-05 18:00:43.642779] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:26:23.708 [2024-11-05 18:00:43.642865] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:26:23.708 [2024-11-05 18:00:43.642946] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:26:23.708 [2024-11-05 18:00:43.642954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.708 [2024-11-05 18:00:43.642961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:26:23.708 [2024-11-05 18:00:43.642971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.344 ms 00:26:23.708 [2024-11-05 18:00:43.642977] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.708 [2024-11-05 18:00:43.643025] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:26:23.708 [2024-11-05 18:00:43.643035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.708 [2024-11-05 18:00:43.643042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:26:23.708 [2024-11-05 18:00:43.643048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:26:23.708 [2024-11-05 18:00:43.643054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.708 [2024-11-05 18:00:43.644979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.708 [2024-11-05 18:00:43.645178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:26:23.708 [2024-11-05 18:00:43.645197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.888 ms 00:26:23.708 [2024-11-05 18:00:43.645206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.708 [2024-11-05 18:00:43.645666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.708 [2024-11-05 18:00:43.645688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:26:23.708 [2024-11-05 18:00:43.645697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:23.708 [2024-11-05 18:00:43.645707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.708 [2024-11-05 18:00:43.645750] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:26:23.708 [2024-11-05 18:00:43.645906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.708 [2024-11-05 18:00:43.645915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:26:23.708 [2024-11-05 18:00:43.645927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.157 ms 00:26:23.708 [2024-11-05 18:00:43.645939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.274 [2024-11-05 18:00:44.112136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.274 [2024-11-05 18:00:44.112355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:26:24.274 [2024-11-05 18:00:44.112377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 465.953 ms 00:26:24.274 [2024-11-05 18:00:44.112388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.274 [2024-11-05 18:00:44.114091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.274 [2024-11-05 18:00:44.114125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:26:24.274 [2024-11-05 18:00:44.114141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.202 ms 00:26:24.274 [2024-11-05 18:00:44.114151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.274 [2024-11-05 18:00:44.114500] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:26:24.274 [2024-11-05 18:00:44.114528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.274 [2024-11-05 18:00:44.114538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:26:24.274 [2024-11-05 18:00:44.114556] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.350 ms 00:26:24.274 [2024-11-05 18:00:44.114564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.274 [2024-11-05 18:00:44.114594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.274 [2024-11-05 18:00:44.114606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:26:24.274 [2024-11-05 18:00:44.114615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:24.274 [2024-11-05 18:00:44.114629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.274 [2024-11-05 18:00:44.114663] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 468.909 ms, result 0 00:26:24.274 [2024-11-05 18:00:44.114700] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:26:24.274 [2024-11-05 18:00:44.114849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.274 [2024-11-05 18:00:44.114860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:26:24.274 [2024-11-05 18:00:44.114868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.149 ms 00:26:24.274 [2024-11-05 18:00:44.114876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.840 [2024-11-05 18:00:44.610418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.840 [2024-11-05 18:00:44.610468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:26:24.840 [2024-11-05 18:00:44.610481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 495.121 ms 00:26:24.840 [2024-11-05 18:00:44.610490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.840 [2024-11-05 18:00:44.612021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.840 [2024-11-05 18:00:44.612206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:26:24.840 [2024-11-05 18:00:44.612222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.104 ms 00:26:24.840 [2024-11-05 18:00:44.612231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.840 [2024-11-05 18:00:44.612644] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:26:24.840 [2024-11-05 18:00:44.612672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.840 [2024-11-05 18:00:44.612680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:26:24.840 [2024-11-05 18:00:44.612689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.422 ms 00:26:24.840 [2024-11-05 18:00:44.612697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.840 [2024-11-05 18:00:44.612727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.840 [2024-11-05 18:00:44.612737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:26:24.840 [2024-11-05 18:00:44.612745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:24.841 [2024-11-05 18:00:44.612752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.841 [2024-11-05 18:00:44.612786] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 
498.080 ms, result 0 00:26:24.841 [2024-11-05 18:00:44.612829] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:24.841 [2024-11-05 18:00:44.612840] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:24.841 [2024-11-05 18:00:44.612849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.841 [2024-11-05 18:00:44.612857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:26:24.841 [2024-11-05 18:00:44.612866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 967.113 ms 00:26:24.841 [2024-11-05 18:00:44.612876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.841 [2024-11-05 18:00:44.612906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.841 [2024-11-05 18:00:44.612915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:26:24.841 [2024-11-05 18:00:44.612923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:24.841 [2024-11-05 18:00:44.612930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.841 [2024-11-05 18:00:44.621206] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:24.841 [2024-11-05 18:00:44.621320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.841 [2024-11-05 18:00:44.621339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:24.841 [2024-11-05 18:00:44.621349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.373 ms 00:26:24.841 [2024-11-05 18:00:44.621359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.841 [2024-11-05 18:00:44.622042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.841 [2024-11-05 18:00:44.622086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:26:24.841 [2024-11-05 18:00:44.622096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.616 ms 00:26:24.841 [2024-11-05 18:00:44.622103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.841 [2024-11-05 18:00:44.624324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.841 [2024-11-05 18:00:44.624349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:26:24.841 [2024-11-05 18:00:44.624359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.204 ms 00:26:24.841 [2024-11-05 18:00:44.624368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.841 [2024-11-05 18:00:44.624406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.841 [2024-11-05 18:00:44.624416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:26:24.841 [2024-11-05 18:00:44.624425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:24.841 [2024-11-05 18:00:44.624433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.841 [2024-11-05 18:00:44.624537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.841 [2024-11-05 18:00:44.624546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:24.841 [2024-11-05 18:00:44.624557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:26:24.841 [2024-11-05 
18:00:44.624564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.841 [2024-11-05 18:00:44.624586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.841 [2024-11-05 18:00:44.624594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:24.841 [2024-11-05 18:00:44.624601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:24.841 [2024-11-05 18:00:44.624616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.841 [2024-11-05 18:00:44.624643] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:24.841 [2024-11-05 18:00:44.624653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.841 [2024-11-05 18:00:44.624661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:24.841 [2024-11-05 18:00:44.624668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:26:24.841 [2024-11-05 18:00:44.624678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.841 [2024-11-05 18:00:44.624735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.841 [2024-11-05 18:00:44.624744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:24.841 [2024-11-05 18:00:44.624753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:26:24.841 [2024-11-05 18:00:44.624760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.841 [2024-11-05 18:00:44.625824] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1056.971 ms, result 0 00:26:24.841 [2024-11-05 18:00:44.637641] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:24.841 [2024-11-05 18:00:44.653638] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:24.841 [2024-11-05 18:00:44.661777] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:24.841 Validate MD5 checksum, iteration 1 00:26:24.841 18:00:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:24.841 18:00:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:26:24.841 18:00:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:24.841 18:00:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:26:24.841 18:00:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:26:24.841 18:00:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:24.841 18:00:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:24.841 18:00:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:24.841 18:00:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:24.841 18:00:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:24.841 18:00:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:24.841 18:00:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 
00:26:24.841 18:00:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:24.841 18:00:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:24.841 18:00:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:24.841 [2024-11-05 18:00:44.749139] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:26:24.841 [2024-11-05 18:00:44.749576] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92292 ] 00:26:25.100 [2024-11-05 18:00:44.878553] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:25.100 [2024-11-05 18:00:44.909831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.100 [2024-11-05 18:00:44.927853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:26.473  [2024-11-05T18:00:47.029Z] Copying: 647/1024 [MB] (647 MBps) [2024-11-05T18:00:47.963Z] Copying: 1024/1024 [MB] (average 639 MBps) 00:26:27.972 00:26:27.972 18:00:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:27.972 18:00:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:29.873 18:00:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:29.873 18:00:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=cbe1dce3cd7ad452c880246204737006 00:26:29.873 18:00:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ cbe1dce3cd7ad452c880246204737006 != \c\b\e\1\d\c\e\3\c\d\7\a\d\4\5\2\c\8\8\0\2\4\6\2\0\4\7\3\7\0\0\6 ]] 00:26:29.873 18:00:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:29.873 18:00:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:29.873 18:00:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:29.873 Validate MD5 checksum, iteration 2 00:26:29.873 18:00:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:29.873 18:00:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:29.873 18:00:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:29.873 18:00:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:29.873 18:00:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:29.873 18:00:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:29.873 
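That comparison is the payoff: after the kill -9 and recovery, iteration 1 read back exactly the pre-shutdown checksum cbe1dce3cd7ad452c880246204737006 (bash xtrace prints the expected value in escaped form, hence the backslashes), and iteration 2 below must reproduce b05d1bc4b5b0345f8d5efe834e617983 for the second gigabyte. The shape of the loop as it can be read from the xtrace; file, iterations and the expected sums are assumptions standing in for values established earlier in the script:

    # Sketch of test_validate_checksum's shape; not the verbatim script.
    test_validate_checksum() {
        local skip=0 i sum
        for ((i = 0; i < iterations; i++)); do
            echo "Validate MD5 checksum, iteration $((i + 1))"
            # tcp_dd reads 1024 x 1 MiB blocks from ftln1 at queue depth 2.
            tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
            sum=$(md5sum "$file" | cut -f1 '-d ')
            if [[ $sum != "${expected[i]}" ]]; then
                return 1    # readback must match the pre-kill checksum
            fi
            skip=$((skip + 1024))
        done
    }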
[2024-11-05 18:00:49.525559] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:26:29.873 [2024-11-05 18:00:49.525807] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92348 ] 00:26:29.873 [2024-11-05 18:00:49.654146] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:29.873 [2024-11-05 18:00:49.685666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.873 [2024-11-05 18:00:49.703984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:31.247  [2024-11-05T18:00:51.805Z] Copying: 676/1024 [MB] (676 MBps) [2024-11-05T18:00:52.370Z] Copying: 1024/1024 [MB] (average 655 MBps) 00:26:32.379 00:26:32.379 18:00:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:26:32.379 18:00:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:34.909 18:00:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:34.909 18:00:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b05d1bc4b5b0345f8d5efe834e617983 00:26:34.909 18:00:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b05d1bc4b5b0345f8d5efe834e617983 != \b\0\5\d\1\b\c\4\b\5\b\0\3\4\5\f\8\d\5\e\f\e\8\3\4\e\6\1\7\9\8\3 ]] 00:26:34.909 18:00:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:34.909 18:00:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:34.909 18:00:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:26:34.909 18:00:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:26:34.909 18:00:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:26:34.909 18:00:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:34.909 18:00:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:26:34.909 18:00:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:26:34.909 18:00:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:26:34.909 18:00:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:26:34.909 18:00:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 92265 ]] 00:26:34.909 18:00:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 92265 00:26:34.909 18:00:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 92265 ']' 00:26:34.909 18:00:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 92265 00:26:34.910 18:00:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:26:34.910 18:00:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:34.910 18:00:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 92265 00:26:34.910 killing process with pid 92265 00:26:34.910 18:00:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:34.910 18:00:54 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:34.910 18:00:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 92265' 00:26:34.910 18:00:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 92265 00:26:34.910 18:00:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 92265 00:26:34.910 [2024-11-05 18:00:54.665331] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:26:34.910 [2024-11-05 18:00:54.669388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.910 [2024-11-05 18:00:54.669498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:26:34.910 [2024-11-05 18:00:54.669555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:34.910 [2024-11-05 18:00:54.669575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.910 [2024-11-05 18:00:54.669607] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:26:34.910 [2024-11-05 18:00:54.670206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.910 [2024-11-05 18:00:54.670293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:26:34.910 [2024-11-05 18:00:54.670339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.535 ms 00:26:34.910 [2024-11-05 18:00:54.670361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.910 [2024-11-05 18:00:54.670592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.910 [2024-11-05 18:00:54.670652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:34.910 [2024-11-05 18:00:54.670689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.170 ms 00:26:34.910 [2024-11-05 18:00:54.670706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.910 [2024-11-05 18:00:54.671884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.910 [2024-11-05 18:00:54.671906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:34.910 [2024-11-05 18:00:54.671914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.154 ms 00:26:34.910 [2024-11-05 18:00:54.671921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.910 [2024-11-05 18:00:54.672788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.910 [2024-11-05 18:00:54.672871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:26:34.910 [2024-11-05 18:00:54.672882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.840 ms 00:26:34.910 [2024-11-05 18:00:54.672889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.910 [2024-11-05 18:00:54.674358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.910 [2024-11-05 18:00:54.674382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:34.910 [2024-11-05 18:00:54.674390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.434 ms 00:26:34.910 [2024-11-05 18:00:54.674396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.910 [2024-11-05 18:00:54.675671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.910 [2024-11-05 18:00:54.675701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:26:34.910 [2024-11-05 18:00:54.675709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.242 ms 00:26:34.910 [2024-11-05 18:00:54.675715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.910 [2024-11-05 18:00:54.675777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.910 [2024-11-05 18:00:54.675789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:34.910 [2024-11-05 18:00:54.675797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:26:34.910 [2024-11-05 18:00:54.675803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.910 [2024-11-05 18:00:54.676901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.910 [2024-11-05 18:00:54.676979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:26:34.910 [2024-11-05 18:00:54.677021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.084 ms 00:26:34.910 [2024-11-05 18:00:54.677040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.910 [2024-11-05 18:00:54.678464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.910 [2024-11-05 18:00:54.678545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:26:34.910 [2024-11-05 18:00:54.678610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.383 ms 00:26:34.910 [2024-11-05 18:00:54.678629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.910 [2024-11-05 18:00:54.679788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.910 [2024-11-05 18:00:54.679872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:34.910 [2024-11-05 18:00:54.679910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.127 ms 00:26:34.910 [2024-11-05 18:00:54.679926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.910 [2024-11-05 18:00:54.681057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.910 [2024-11-05 18:00:54.681145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:34.910 [2024-11-05 18:00:54.681156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.076 ms 00:26:34.910 [2024-11-05 18:00:54.681162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.910 [2024-11-05 18:00:54.681184] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:34.910 [2024-11-05 18:00:54.681200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:34.910 [2024-11-05 18:00:54.681210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:34.910 [2024-11-05 18:00:54.681216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:34.910 [2024-11-05 18:00:54.681223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:34.910 [2024-11-05 18:00:54.681229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:34.910 [2024-11-05 18:00:54.681236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:34.910 [2024-11-05 18:00:54.681243] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:34.910 [2024-11-05 18:00:54.681249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:34.910 [2024-11-05 18:00:54.681255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:34.910 [2024-11-05 18:00:54.681261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:34.910 [2024-11-05 18:00:54.681267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:34.910 [2024-11-05 18:00:54.681274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:34.910 [2024-11-05 18:00:54.681280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:34.910 [2024-11-05 18:00:54.681285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:34.910 [2024-11-05 18:00:54.681291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:34.910 [2024-11-05 18:00:54.681297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:34.910 [2024-11-05 18:00:54.681304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:34.910 [2024-11-05 18:00:54.681310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:34.910 [2024-11-05 18:00:54.681317] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:34.910 [2024-11-05 18:00:54.681323] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 5b175487-60f0-4f62-9149-393cee6672a5 00:26:34.910 [2024-11-05 18:00:54.681330] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:34.910 [2024-11-05 18:00:54.681336] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:26:34.910 [2024-11-05 18:00:54.681345] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:26:34.910 [2024-11-05 18:00:54.681351] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:26:34.910 [2024-11-05 18:00:54.681356] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:34.910 [2024-11-05 18:00:54.681363] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:34.910 [2024-11-05 18:00:54.681369] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:34.910 [2024-11-05 18:00:54.681374] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:34.910 [2024-11-05 18:00:54.681379] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:34.910 [2024-11-05 18:00:54.681385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.910 [2024-11-05 18:00:54.681392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:34.910 [2024-11-05 18:00:54.681401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.202 ms 00:26:34.910 [2024-11-05 18:00:54.681408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.910 [2024-11-05 18:00:54.683017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.910 [2024-11-05 18:00:54.683127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Deinitialize L2P 00:26:34.910 [2024-11-05 18:00:54.683139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.596 ms 00:26:34.910 [2024-11-05 18:00:54.683146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.910 [2024-11-05 18:00:54.683239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:34.910 [2024-11-05 18:00:54.683251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:34.910 [2024-11-05 18:00:54.683259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:26:34.910 [2024-11-05 18:00:54.683265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.911 [2024-11-05 18:00:54.689003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:34.911 [2024-11-05 18:00:54.689030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:34.911 [2024-11-05 18:00:54.689038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:34.911 [2024-11-05 18:00:54.689044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.911 [2024-11-05 18:00:54.689083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:34.911 [2024-11-05 18:00:54.689095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:34.911 [2024-11-05 18:00:54.689102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:34.911 [2024-11-05 18:00:54.689108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.911 [2024-11-05 18:00:54.689166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:34.911 [2024-11-05 18:00:54.689176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:34.911 [2024-11-05 18:00:54.689182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:34.911 [2024-11-05 18:00:54.689192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.911 [2024-11-05 18:00:54.689206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:34.911 [2024-11-05 18:00:54.689214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:34.911 [2024-11-05 18:00:54.689222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:34.911 [2024-11-05 18:00:54.689227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.911 [2024-11-05 18:00:54.700080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:34.911 [2024-11-05 18:00:54.700110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:34.911 [2024-11-05 18:00:54.700118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:34.911 [2024-11-05 18:00:54.700125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.911 [2024-11-05 18:00:54.708260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:34.911 [2024-11-05 18:00:54.708296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:34.911 [2024-11-05 18:00:54.708305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:34.911 [2024-11-05 18:00:54.708311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.911 [2024-11-05 18:00:54.708368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:34.911 [2024-11-05 18:00:54.708377] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:34.911 [2024-11-05 18:00:54.708384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:34.911 [2024-11-05 18:00:54.708391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.911 [2024-11-05 18:00:54.708418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:34.911 [2024-11-05 18:00:54.708425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:34.911 [2024-11-05 18:00:54.708436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:34.911 [2024-11-05 18:00:54.708444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.911 [2024-11-05 18:00:54.708500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:34.911 [2024-11-05 18:00:54.708508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:34.911 [2024-11-05 18:00:54.708515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:34.911 [2024-11-05 18:00:54.708520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.911 [2024-11-05 18:00:54.708544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:34.911 [2024-11-05 18:00:54.708552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:34.911 [2024-11-05 18:00:54.708558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:34.911 [2024-11-05 18:00:54.708564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.911 [2024-11-05 18:00:54.708602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:34.911 [2024-11-05 18:00:54.708613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:34.911 [2024-11-05 18:00:54.708619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:34.911 [2024-11-05 18:00:54.708625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.911 [2024-11-05 18:00:54.708660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:34.911 [2024-11-05 18:00:54.708672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:34.911 [2024-11-05 18:00:54.708678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:34.911 [2024-11-05 18:00:54.708686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:34.911 [2024-11-05 18:00:54.708798] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 39.373 ms, result 0 00:26:34.911 18:00:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:34.911 18:00:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:34.911 18:00:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:26:34.911 18:00:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:26:34.911 18:00:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:26:34.911 18:00:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:34.911 Remove shared memory files 00:26:34.911 18:00:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:26:34.911 18:00:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove 
shared memory files 00:26:34.911 18:00:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:26:35.170 18:00:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:26:35.170 18:00:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid92042 00:26:35.170 18:00:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:35.170 18:00:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:26:35.170 00:26:35.170 real 1m14.016s 00:26:35.170 user 1m37.744s 00:26:35.170 sys 0m19.252s 00:26:35.170 18:00:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:35.170 ************************************ 00:26:35.170 END TEST ftl_upgrade_shutdown 00:26:35.170 ************************************ 00:26:35.170 18:00:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:35.170 18:00:54 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:26:35.170 18:00:54 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:26:35.170 18:00:54 ftl -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:26:35.170 18:00:54 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:35.170 18:00:54 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:35.170 ************************************ 00:26:35.170 START TEST ftl_restore_fast 00:26:35.170 ************************************ 00:26:35.170 18:00:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:26:35.170 * Looking for test storage... 00:26:35.170 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1691 -- # lcov --version 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:35.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.170 --rc genhtml_branch_coverage=1 00:26:35.170 --rc genhtml_function_coverage=1 00:26:35.170 --rc genhtml_legend=1 00:26:35.170 --rc geninfo_all_blocks=1 00:26:35.170 --rc geninfo_unexecuted_blocks=1 00:26:35.170 00:26:35.170 ' 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:35.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.170 --rc genhtml_branch_coverage=1 00:26:35.170 --rc genhtml_function_coverage=1 00:26:35.170 --rc genhtml_legend=1 00:26:35.170 --rc geninfo_all_blocks=1 00:26:35.170 --rc geninfo_unexecuted_blocks=1 00:26:35.170 00:26:35.170 ' 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:35.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.170 --rc genhtml_branch_coverage=1 00:26:35.170 --rc genhtml_function_coverage=1 00:26:35.170 --rc genhtml_legend=1 00:26:35.170 --rc geninfo_all_blocks=1 00:26:35.170 --rc geninfo_unexecuted_blocks=1 00:26:35.170 00:26:35.170 ' 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:35.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.170 --rc genhtml_branch_coverage=1 00:26:35.170 --rc genhtml_function_coverage=1 00:26:35.170 --rc genhtml_legend=1 00:26:35.170 --rc geninfo_all_blocks=1 00:26:35.170 --rc geninfo_unexecuted_blocks=1 00:26:35.170 00:26:35.170 ' 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
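
[editor's note] The `lt 1.15 2` / `cmp_versions` trace a few records back is scripts/common.sh comparing the installed lcov's version field by field: split both versions on `.`, `-`, and `:`, then compare numerically left to right until one side wins. A condensed sketch of that logic follows; `version_lt` is a hypothetical helper name, not the repo's function, and the real common.sh additionally regex-checks that each field is numeric before comparing:

    # Sketch of the field-wise version compare traced above (assumption: all
    # fields are plain integers; common.sh validates this separately).
    version_lt() {   # version_lt 1.15 2  -> returns 0 (true): 1.15 sorts before 2
        local -a ver1 ver2
        local IFS=.-: v=0 a b
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        while (( v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}) )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}    # missing fields compare as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
            (( v++ ))
        done
        return 1                               # equal is not less-than
    }

That matches the trace: ver1=(1 15) against ver2=(2), the first fields give 1 < 2, so lcov 1.15 is "less than" 2 and the branch-coverage LCOV_OPTS get set.
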
00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.cZuKOHmxZz 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:26:35.170 18:00:55 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=92482 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 92482 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- common/autotest_common.sh@833 -- # '[' -z 92482 ']' 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:35.170 18:00:55 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:35.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:35.171 18:00:55 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:35.171 18:00:55 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:26:35.429 [2024-11-05 18:00:55.187343] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:26:35.429 [2024-11-05 18:00:55.187602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92482 ] 00:26:35.429 [2024-11-05 18:00:55.317156] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
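
[editor's note] Between the `waitforlisten 92482` step above and the RPC-driven bdev setup below, the harness simply blocks until the freshly forked spdk_tgt answers on /var/tmp/spdk.sock. A minimal sketch of that wait loop, assuming the stock `spdk_get_version` RPC as a liveness probe — the repo's actual waitforlisten in autotest_common.sh is more thorough than this:

    # Hedged sketch, not the repo's waitforlisten: start the target on core 0
    # (the log's ftl_tgt_core_mask='[0]', reported by EAL as -c 0x1) and poll
    # its RPC socket, bailing out if the process dies before it listens.
    "$rootdir"/build/bin/spdk_tgt -m '[0]' &
    svcpid=$!
    until "$rootdir"/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$svcpid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done
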
00:26:35.429 [2024-11-05 18:00:55.346653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.429 [2024-11-05 18:00:55.371117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.372 18:00:56 ftl.ftl_restore_fast -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:36.372 18:00:56 ftl.ftl_restore_fast -- common/autotest_common.sh@866 -- # return 0 00:26:36.372 18:00:56 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:36.372 18:00:56 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:26:36.372 18:00:56 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:36.372 18:00:56 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:26:36.372 18:00:56 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:26:36.372 18:00:56 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:36.372 18:00:56 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:36.372 18:00:56 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:26:36.372 18:00:56 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:36.372 18:00:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:26:36.373 18:00:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local bdev_info 00:26:36.373 18:00:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bs 00:26:36.373 18:00:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local nb 00:26:36.373 18:00:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:36.635 18:00:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:26:36.635 { 00:26:36.635 "name": "nvme0n1", 00:26:36.635 "aliases": [ 00:26:36.635 "4d14d158-5b36-45aa-911b-db14be9a1785" 00:26:36.635 ], 00:26:36.635 "product_name": "NVMe disk", 00:26:36.635 "block_size": 4096, 00:26:36.635 "num_blocks": 1310720, 00:26:36.635 "uuid": "4d14d158-5b36-45aa-911b-db14be9a1785", 00:26:36.635 "numa_id": -1, 00:26:36.635 "assigned_rate_limits": { 00:26:36.635 "rw_ios_per_sec": 0, 00:26:36.635 "rw_mbytes_per_sec": 0, 00:26:36.635 "r_mbytes_per_sec": 0, 00:26:36.635 "w_mbytes_per_sec": 0 00:26:36.635 }, 00:26:36.635 "claimed": true, 00:26:36.635 "claim_type": "read_many_write_one", 00:26:36.635 "zoned": false, 00:26:36.635 "supported_io_types": { 00:26:36.635 "read": true, 00:26:36.635 "write": true, 00:26:36.635 "unmap": true, 00:26:36.635 "flush": true, 00:26:36.635 "reset": true, 00:26:36.635 "nvme_admin": true, 00:26:36.635 "nvme_io": true, 00:26:36.635 "nvme_io_md": false, 00:26:36.635 "write_zeroes": true, 00:26:36.635 "zcopy": false, 00:26:36.635 "get_zone_info": false, 00:26:36.635 "zone_management": false, 00:26:36.635 "zone_append": false, 00:26:36.635 "compare": true, 00:26:36.635 "compare_and_write": false, 00:26:36.635 "abort": true, 00:26:36.635 "seek_hole": false, 00:26:36.635 "seek_data": false, 00:26:36.635 "copy": true, 00:26:36.635 "nvme_iov_md": false 00:26:36.635 }, 00:26:36.635 "driver_specific": { 00:26:36.635 "nvme": [ 00:26:36.635 { 00:26:36.635 "pci_address": "0000:00:11.0", 00:26:36.635 "trid": { 00:26:36.635 "trtype": "PCIe", 00:26:36.635 "traddr": "0000:00:11.0" 00:26:36.635 }, 00:26:36.635 "ctrlr_data": { 00:26:36.635 "cntlid": 0, 00:26:36.635 
"vendor_id": "0x1b36", 00:26:36.635 "model_number": "QEMU NVMe Ctrl", 00:26:36.635 "serial_number": "12341", 00:26:36.635 "firmware_revision": "8.0.0", 00:26:36.635 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:36.635 "oacs": { 00:26:36.635 "security": 0, 00:26:36.635 "format": 1, 00:26:36.635 "firmware": 0, 00:26:36.635 "ns_manage": 1 00:26:36.635 }, 00:26:36.635 "multi_ctrlr": false, 00:26:36.635 "ana_reporting": false 00:26:36.635 }, 00:26:36.635 "vs": { 00:26:36.635 "nvme_version": "1.4" 00:26:36.635 }, 00:26:36.635 "ns_data": { 00:26:36.635 "id": 1, 00:26:36.635 "can_share": false 00:26:36.635 } 00:26:36.635 } 00:26:36.635 ], 00:26:36.635 "mp_policy": "active_passive" 00:26:36.635 } 00:26:36.635 } 00:26:36.635 ]' 00:26:36.635 18:00:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:26:36.635 18:00:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # bs=4096 00:26:36.635 18:00:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:26:36.635 18:00:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # nb=1310720 00:26:36.635 18:00:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:26:36.635 18:00:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1390 -- # echo 5120 00:26:36.635 18:00:56 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:26:36.635 18:00:56 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:36.635 18:00:56 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:26:36.635 18:00:56 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:36.635 18:00:56 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:36.896 18:00:56 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=f74480dd-c6ee-4d77-ab53-5a28cdc38bcf 00:26:36.896 18:00:56 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:26:36.896 18:00:56 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f74480dd-c6ee-4d77-ab53-5a28cdc38bcf 00:26:37.156 18:00:57 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:26:37.416 18:00:57 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=6dfa23b5-97ae-4fcb-a95e-855b83d9a5ea 00:26:37.416 18:00:57 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6dfa23b5-97ae-4fcb-a95e-855b83d9a5ea 00:26:37.675 18:00:57 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=9a1d0882-8d42-45c0-a47f-0c0e3c645726 00:26:37.675 18:00:57 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:26:37.675 18:00:57 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9a1d0882-8d42-45c0-a47f-0c0e3c645726 00:26:37.675 18:00:57 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:26:37.675 18:00:57 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:37.675 18:00:57 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=9a1d0882-8d42-45c0-a47f-0c0e3c645726 00:26:37.675 18:00:57 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:26:37.675 18:00:57 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 9a1d0882-8d42-45c0-a47f-0c0e3c645726 00:26:37.675 18:00:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local 
bdev_name=9a1d0882-8d42-45c0-a47f-0c0e3c645726 00:26:37.675 18:00:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local bdev_info 00:26:37.675 18:00:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bs 00:26:37.675 18:00:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local nb 00:26:37.675 18:00:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9a1d0882-8d42-45c0-a47f-0c0e3c645726 00:26:37.933 18:00:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:26:37.933 { 00:26:37.933 "name": "9a1d0882-8d42-45c0-a47f-0c0e3c645726", 00:26:37.933 "aliases": [ 00:26:37.933 "lvs/nvme0n1p0" 00:26:37.933 ], 00:26:37.933 "product_name": "Logical Volume", 00:26:37.933 "block_size": 4096, 00:26:37.933 "num_blocks": 26476544, 00:26:37.933 "uuid": "9a1d0882-8d42-45c0-a47f-0c0e3c645726", 00:26:37.933 "assigned_rate_limits": { 00:26:37.933 "rw_ios_per_sec": 0, 00:26:37.933 "rw_mbytes_per_sec": 0, 00:26:37.933 "r_mbytes_per_sec": 0, 00:26:37.933 "w_mbytes_per_sec": 0 00:26:37.933 }, 00:26:37.933 "claimed": false, 00:26:37.933 "zoned": false, 00:26:37.933 "supported_io_types": { 00:26:37.933 "read": true, 00:26:37.933 "write": true, 00:26:37.933 "unmap": true, 00:26:37.933 "flush": false, 00:26:37.933 "reset": true, 00:26:37.933 "nvme_admin": false, 00:26:37.933 "nvme_io": false, 00:26:37.933 "nvme_io_md": false, 00:26:37.933 "write_zeroes": true, 00:26:37.933 "zcopy": false, 00:26:37.933 "get_zone_info": false, 00:26:37.933 "zone_management": false, 00:26:37.933 "zone_append": false, 00:26:37.933 "compare": false, 00:26:37.933 "compare_and_write": false, 00:26:37.933 "abort": false, 00:26:37.933 "seek_hole": true, 00:26:37.933 "seek_data": true, 00:26:37.933 "copy": false, 00:26:37.934 "nvme_iov_md": false 00:26:37.934 }, 00:26:37.934 "driver_specific": { 00:26:37.934 "lvol": { 00:26:37.934 "lvol_store_uuid": "6dfa23b5-97ae-4fcb-a95e-855b83d9a5ea", 00:26:37.934 "base_bdev": "nvme0n1", 00:26:37.934 "thin_provision": true, 00:26:37.934 "num_allocated_clusters": 0, 00:26:37.934 "snapshot": false, 00:26:37.934 "clone": false, 00:26:37.934 "esnap_clone": false 00:26:37.934 } 00:26:37.934 } 00:26:37.934 } 00:26:37.934 ]' 00:26:37.934 18:00:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:26:37.934 18:00:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # bs=4096 00:26:37.934 18:00:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:26:37.934 18:00:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # nb=26476544 00:26:37.934 18:00:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:26:37.934 18:00:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1390 -- # echo 103424 00:26:37.934 18:00:57 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:26:37.934 18:00:57 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:26:37.934 18:00:57 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:38.192 18:00:58 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:38.192 18:00:58 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:38.192 18:00:58 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 9a1d0882-8d42-45c0-a47f-0c0e3c645726 00:26:38.192 18:00:58 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1380 -- # local bdev_name=9a1d0882-8d42-45c0-a47f-0c0e3c645726 00:26:38.192 18:00:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local bdev_info 00:26:38.192 18:00:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bs 00:26:38.192 18:00:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local nb 00:26:38.192 18:00:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9a1d0882-8d42-45c0-a47f-0c0e3c645726 00:26:38.450 18:00:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:26:38.450 { 00:26:38.450 "name": "9a1d0882-8d42-45c0-a47f-0c0e3c645726", 00:26:38.450 "aliases": [ 00:26:38.450 "lvs/nvme0n1p0" 00:26:38.450 ], 00:26:38.450 "product_name": "Logical Volume", 00:26:38.450 "block_size": 4096, 00:26:38.450 "num_blocks": 26476544, 00:26:38.450 "uuid": "9a1d0882-8d42-45c0-a47f-0c0e3c645726", 00:26:38.450 "assigned_rate_limits": { 00:26:38.450 "rw_ios_per_sec": 0, 00:26:38.450 "rw_mbytes_per_sec": 0, 00:26:38.450 "r_mbytes_per_sec": 0, 00:26:38.450 "w_mbytes_per_sec": 0 00:26:38.450 }, 00:26:38.450 "claimed": false, 00:26:38.450 "zoned": false, 00:26:38.450 "supported_io_types": { 00:26:38.450 "read": true, 00:26:38.450 "write": true, 00:26:38.450 "unmap": true, 00:26:38.450 "flush": false, 00:26:38.450 "reset": true, 00:26:38.450 "nvme_admin": false, 00:26:38.450 "nvme_io": false, 00:26:38.450 "nvme_io_md": false, 00:26:38.450 "write_zeroes": true, 00:26:38.450 "zcopy": false, 00:26:38.450 "get_zone_info": false, 00:26:38.450 "zone_management": false, 00:26:38.450 "zone_append": false, 00:26:38.450 "compare": false, 00:26:38.450 "compare_and_write": false, 00:26:38.450 "abort": false, 00:26:38.450 "seek_hole": true, 00:26:38.450 "seek_data": true, 00:26:38.450 "copy": false, 00:26:38.450 "nvme_iov_md": false 00:26:38.450 }, 00:26:38.450 "driver_specific": { 00:26:38.450 "lvol": { 00:26:38.450 "lvol_store_uuid": "6dfa23b5-97ae-4fcb-a95e-855b83d9a5ea", 00:26:38.450 "base_bdev": "nvme0n1", 00:26:38.450 "thin_provision": true, 00:26:38.450 "num_allocated_clusters": 0, 00:26:38.450 "snapshot": false, 00:26:38.450 "clone": false, 00:26:38.450 "esnap_clone": false 00:26:38.450 } 00:26:38.450 } 00:26:38.450 } 00:26:38.450 ]' 00:26:38.450 18:00:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:26:38.450 18:00:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # bs=4096 00:26:38.450 18:00:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:26:38.450 18:00:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # nb=26476544 00:26:38.450 18:00:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:26:38.450 18:00:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1390 -- # echo 103424 00:26:38.450 18:00:58 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:26:38.450 18:00:58 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:38.708 18:00:58 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:26:38.708 18:00:58 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 9a1d0882-8d42-45c0-a47f-0c0e3c645726 00:26:38.708 18:00:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bdev_name=9a1d0882-8d42-45c0-a47f-0c0e3c645726 00:26:38.708 18:00:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 
-- # local bdev_info 00:26:38.708 18:00:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bs 00:26:38.708 18:00:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local nb 00:26:38.708 18:00:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9a1d0882-8d42-45c0-a47f-0c0e3c645726 00:26:38.967 18:00:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:26:38.967 { 00:26:38.967 "name": "9a1d0882-8d42-45c0-a47f-0c0e3c645726", 00:26:38.967 "aliases": [ 00:26:38.967 "lvs/nvme0n1p0" 00:26:38.967 ], 00:26:38.967 "product_name": "Logical Volume", 00:26:38.967 "block_size": 4096, 00:26:38.967 "num_blocks": 26476544, 00:26:38.967 "uuid": "9a1d0882-8d42-45c0-a47f-0c0e3c645726", 00:26:38.967 "assigned_rate_limits": { 00:26:38.967 "rw_ios_per_sec": 0, 00:26:38.967 "rw_mbytes_per_sec": 0, 00:26:38.967 "r_mbytes_per_sec": 0, 00:26:38.967 "w_mbytes_per_sec": 0 00:26:38.967 }, 00:26:38.967 "claimed": false, 00:26:38.967 "zoned": false, 00:26:38.967 "supported_io_types": { 00:26:38.967 "read": true, 00:26:38.967 "write": true, 00:26:38.967 "unmap": true, 00:26:38.967 "flush": false, 00:26:38.967 "reset": true, 00:26:38.967 "nvme_admin": false, 00:26:38.967 "nvme_io": false, 00:26:38.967 "nvme_io_md": false, 00:26:38.967 "write_zeroes": true, 00:26:38.967 "zcopy": false, 00:26:38.967 "get_zone_info": false, 00:26:38.967 "zone_management": false, 00:26:38.967 "zone_append": false, 00:26:38.967 "compare": false, 00:26:38.967 "compare_and_write": false, 00:26:38.967 "abort": false, 00:26:38.967 "seek_hole": true, 00:26:38.967 "seek_data": true, 00:26:38.967 "copy": false, 00:26:38.967 "nvme_iov_md": false 00:26:38.967 }, 00:26:38.967 "driver_specific": { 00:26:38.967 "lvol": { 00:26:38.967 "lvol_store_uuid": "6dfa23b5-97ae-4fcb-a95e-855b83d9a5ea", 00:26:38.967 "base_bdev": "nvme0n1", 00:26:38.967 "thin_provision": true, 00:26:38.967 "num_allocated_clusters": 0, 00:26:38.967 "snapshot": false, 00:26:38.967 "clone": false, 00:26:38.967 "esnap_clone": false 00:26:38.967 } 00:26:38.967 } 00:26:38.967 } 00:26:38.967 ]' 00:26:38.967 18:00:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:26:38.967 18:00:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # bs=4096 00:26:38.967 18:00:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:26:38.967 18:00:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # nb=26476544 00:26:38.967 18:00:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:26:38.967 18:00:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1390 -- # echo 103424 00:26:38.967 18:00:58 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:26:38.967 18:00:58 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 9a1d0882-8d42-45c0-a47f-0c0e3c645726 --l2p_dram_limit 10' 00:26:38.967 18:00:58 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:26:38.967 18:00:58 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:26:38.967 18:00:58 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:26:38.967 18:00:58 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:26:38.967 18:00:58 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:26:38.967 18:00:58 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9a1d0882-8d42-45c0-a47f-0c0e3c645726 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:26:39.227 [2024-11-05 18:00:58.967165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.227 [2024-11-05 18:00:58.967301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:39.227 [2024-11-05 18:00:58.967321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:39.227 [2024-11-05 18:00:58.967329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.227 [2024-11-05 18:00:58.967385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.227 [2024-11-05 18:00:58.967394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:39.227 [2024-11-05 18:00:58.967406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:26:39.227 [2024-11-05 18:00:58.967412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.227 [2024-11-05 18:00:58.967433] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:39.227 [2024-11-05 18:00:58.967631] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:39.227 [2024-11-05 18:00:58.967646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.227 [2024-11-05 18:00:58.967653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:39.227 [2024-11-05 18:00:58.967662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:26:39.227 [2024-11-05 18:00:58.967673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.227 [2024-11-05 18:00:58.967776] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b68b3d0c-2388-4054-9d20-b1208a1cd419 00:26:39.227 [2024-11-05 18:00:58.969053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.227 [2024-11-05 18:00:58.969090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:39.227 [2024-11-05 18:00:58.969099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:26:39.227 [2024-11-05 18:00:58.969107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.227 [2024-11-05 18:00:58.975939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.227 [2024-11-05 18:00:58.975971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:39.227 [2024-11-05 18:00:58.975978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.800 ms 00:26:39.227 [2024-11-05 18:00:58.975989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.227 [2024-11-05 18:00:58.976061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.227 [2024-11-05 18:00:58.976092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:39.227 [2024-11-05 18:00:58.976098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:26:39.227 [2024-11-05 18:00:58.976107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.227 [2024-11-05 18:00:58.976150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.227 [2024-11-05 18:00:58.976160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO 
device 00:26:39.227 [2024-11-05 18:00:58.976167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:39.227 [2024-11-05 18:00:58.976174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.227 [2024-11-05 18:00:58.976193] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:39.227 [2024-11-05 18:00:58.977865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.227 [2024-11-05 18:00:58.977892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:39.227 [2024-11-05 18:00:58.977902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.676 ms 00:26:39.227 [2024-11-05 18:00:58.977909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.227 [2024-11-05 18:00:58.977940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.227 [2024-11-05 18:00:58.977947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:39.227 [2024-11-05 18:00:58.977958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:39.227 [2024-11-05 18:00:58.977965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.227 [2024-11-05 18:00:58.977981] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:39.227 [2024-11-05 18:00:58.978106] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:39.227 [2024-11-05 18:00:58.978121] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:39.227 [2024-11-05 18:00:58.978131] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:39.227 [2024-11-05 18:00:58.978144] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:39.227 [2024-11-05 18:00:58.978154] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:39.227 [2024-11-05 18:00:58.978170] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:39.227 [2024-11-05 18:00:58.978180] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:39.227 [2024-11-05 18:00:58.978187] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:39.227 [2024-11-05 18:00:58.978194] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:39.227 [2024-11-05 18:00:58.978202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.227 [2024-11-05 18:00:58.978209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:39.227 [2024-11-05 18:00:58.978216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:26:39.227 [2024-11-05 18:00:58.978222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.227 [2024-11-05 18:00:58.978288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.227 [2024-11-05 18:00:58.978387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:39.227 [2024-11-05 18:00:58.978401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:26:39.227 [2024-11-05 18:00:58.978409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.227 
[2024-11-05 18:00:58.978486] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:39.227 [2024-11-05 18:00:58.978495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:39.227 [2024-11-05 18:00:58.978504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:39.227 [2024-11-05 18:00:58.978513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:39.227 [2024-11-05 18:00:58.978523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:39.227 [2024-11-05 18:00:58.978529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:39.227 [2024-11-05 18:00:58.978537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:39.227 [2024-11-05 18:00:58.978542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:39.227 [2024-11-05 18:00:58.978549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:39.227 [2024-11-05 18:00:58.978555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:39.227 [2024-11-05 18:00:58.978562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:39.227 [2024-11-05 18:00:58.978567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:39.227 [2024-11-05 18:00:58.978575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:39.227 [2024-11-05 18:00:58.978581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:39.227 [2024-11-05 18:00:58.978588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:39.227 [2024-11-05 18:00:58.978593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:39.227 [2024-11-05 18:00:58.978600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:39.227 [2024-11-05 18:00:58.978605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:39.227 [2024-11-05 18:00:58.978612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:39.227 [2024-11-05 18:00:58.978618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:39.227 [2024-11-05 18:00:58.978625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:39.227 [2024-11-05 18:00:58.978631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:39.227 [2024-11-05 18:00:58.978643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:39.227 [2024-11-05 18:00:58.978649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:39.227 [2024-11-05 18:00:58.978655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:39.227 [2024-11-05 18:00:58.978660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:39.227 [2024-11-05 18:00:58.978667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:39.227 [2024-11-05 18:00:58.978672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:39.227 [2024-11-05 18:00:58.978681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:39.227 [2024-11-05 18:00:58.978686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:39.227 [2024-11-05 18:00:58.978694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:39.227 [2024-11-05 18:00:58.978699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md 00:26:39.227 [2024-11-05 18:00:58.978706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:39.227 [2024-11-05 18:00:58.978711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:39.227 [2024-11-05 18:00:58.978719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:39.227 [2024-11-05 18:00:58.978724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:39.227 [2024-11-05 18:00:58.978730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:39.227 [2024-11-05 18:00:58.978735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:39.227 [2024-11-05 18:00:58.978742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:39.227 [2024-11-05 18:00:58.978748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:39.228 [2024-11-05 18:00:58.978754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:39.228 [2024-11-05 18:00:58.978759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:39.228 [2024-11-05 18:00:58.978765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:39.228 [2024-11-05 18:00:58.978770] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:39.228 [2024-11-05 18:00:58.978779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:39.228 [2024-11-05 18:00:58.978786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:39.228 [2024-11-05 18:00:58.978792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:39.228 [2024-11-05 18:00:58.978808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:39.228 [2024-11-05 18:00:58.978815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:39.228 [2024-11-05 18:00:58.978821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:39.228 [2024-11-05 18:00:58.978828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:39.228 [2024-11-05 18:00:58.978832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:39.228 [2024-11-05 18:00:58.978839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:39.228 [2024-11-05 18:00:58.978848] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:39.228 [2024-11-05 18:00:58.978864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:39.228 [2024-11-05 18:00:58.978873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:39.228 [2024-11-05 18:00:58.978881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:39.228 [2024-11-05 18:00:58.978888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:39.228 [2024-11-05 18:00:58.978894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:39.228 [2024-11-05 18:00:58.978900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 
blk_offs:0x5920 blk_sz:0x800 00:26:39.228 [2024-11-05 18:00:58.978908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:39.228 [2024-11-05 18:00:58.978914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:39.228 [2024-11-05 18:00:58.978921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:39.228 [2024-11-05 18:00:58.978926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:39.228 [2024-11-05 18:00:58.978933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:39.228 [2024-11-05 18:00:58.978939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:39.228 [2024-11-05 18:00:58.978947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:39.228 [2024-11-05 18:00:58.978953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:39.228 [2024-11-05 18:00:58.978960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:39.228 [2024-11-05 18:00:58.978965] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:39.228 [2024-11-05 18:00:58.978974] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:39.228 [2024-11-05 18:00:58.978981] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:39.228 [2024-11-05 18:00:58.978989] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:39.228 [2024-11-05 18:00:58.978996] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:39.228 [2024-11-05 18:00:58.979003] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:39.228 [2024-11-05 18:00:58.979010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.228 [2024-11-05 18:00:58.979020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:39.228 [2024-11-05 18:00:58.979028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms 00:26:39.228 [2024-11-05 18:00:58.979036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.228 [2024-11-05 18:00:58.979078] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
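For reference, the MiB figures in the layout dump above follow directly from the blk_offs/blk_sz values in the superblock metadata dump, assuming the usual 4 KiB FTL block size (an assumption here; the log itself does not state the block size). A minimal shell sketch of the conversion:

    # Convert an FTL block count (hex) to MiB, assuming 4 KiB blocks.
    blk_to_mib() { echo "scale=2; $(( $1 )) * 4096 / 1048576" | bc; }
    blk_to_mib 0x20    # offset of region type 0x2 -> .12, matching "Region l2p ... offset: 0.12 MiB"
    blk_to_mib 0x5000  # size of region type 0x2   -> 80.00, matching "blocks: 80.00 MiB"

The same arithmetic reproduces the large base-device figure as well: region type 0x9 with blk_sz 0x1900000 works out to the 102400.00 MiB shown for data_btm.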
00:26:39.228 [2024-11-05 18:00:58.979089] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:41.763 [2024-11-05 18:01:01.615324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.763 [2024-11-05 18:01:01.615386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:41.763 [2024-11-05 18:01:01.615399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2636.236 ms 00:26:41.763 [2024-11-05 18:01:01.615408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.763 [2024-11-05 18:01:01.626000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.763 [2024-11-05 18:01:01.626040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:41.763 [2024-11-05 18:01:01.626057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.515 ms 00:26:41.763 [2024-11-05 18:01:01.626099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.763 [2024-11-05 18:01:01.626188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.763 [2024-11-05 18:01:01.626199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:41.763 [2024-11-05 18:01:01.626207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:26:41.763 [2024-11-05 18:01:01.626216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.763 [2024-11-05 18:01:01.636082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.763 [2024-11-05 18:01:01.636115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:41.763 [2024-11-05 18:01:01.636124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.833 ms 00:26:41.763 [2024-11-05 18:01:01.636135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.763 [2024-11-05 18:01:01.636157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.763 [2024-11-05 18:01:01.636165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:41.763 [2024-11-05 18:01:01.636173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:26:41.763 [2024-11-05 18:01:01.636181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.763 [2024-11-05 18:01:01.636581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.763 [2024-11-05 18:01:01.636600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:41.763 [2024-11-05 18:01:01.636609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:26:41.763 [2024-11-05 18:01:01.636620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.763 [2024-11-05 18:01:01.636710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.763 [2024-11-05 18:01:01.636720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:41.763 [2024-11-05 18:01:01.636727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:26:41.763 [2024-11-05 18:01:01.636735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.763 [2024-11-05 18:01:01.643250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.763 [2024-11-05 18:01:01.643421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:41.763 [2024-11-05 
18:01:01.643435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.499 ms 00:26:41.763 [2024-11-05 18:01:01.643443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.763 [2024-11-05 18:01:01.650999] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:41.763 [2024-11-05 18:01:01.653906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.763 [2024-11-05 18:01:01.654008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:41.763 [2024-11-05 18:01:01.654024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.403 ms 00:26:41.763 [2024-11-05 18:01:01.654031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.024 [2024-11-05 18:01:01.772875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.024 [2024-11-05 18:01:01.772915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:42.024 [2024-11-05 18:01:01.772933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 118.817 ms 00:26:42.024 [2024-11-05 18:01:01.772940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.024 [2024-11-05 18:01:01.773108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.024 [2024-11-05 18:01:01.773117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:42.024 [2024-11-05 18:01:01.773126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:26:42.024 [2024-11-05 18:01:01.773132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.024 [2024-11-05 18:01:01.778020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.024 [2024-11-05 18:01:01.778048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:42.024 [2024-11-05 18:01:01.778061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.861 ms 00:26:42.024 [2024-11-05 18:01:01.778081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.024 [2024-11-05 18:01:01.781929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.024 [2024-11-05 18:01:01.782053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:42.024 [2024-11-05 18:01:01.782078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.816 ms 00:26:42.024 [2024-11-05 18:01:01.782084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.024 [2024-11-05 18:01:01.782339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.024 [2024-11-05 18:01:01.782348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:42.024 [2024-11-05 18:01:01.782358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:26:42.024 [2024-11-05 18:01:01.782365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.024 [2024-11-05 18:01:01.851479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.024 [2024-11-05 18:01:01.851513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:42.024 [2024-11-05 18:01:01.851525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.095 ms 00:26:42.024 [2024-11-05 18:01:01.851531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.024 [2024-11-05 18:01:01.857183] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:42.024 [2024-11-05 18:01:01.857209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:26:42.024 [2024-11-05 18:01:01.857219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.613 ms
00:26:42.024 [2024-11-05 18:01:01.857225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:42.024 [2024-11-05 18:01:01.860825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:42.024 [2024-11-05 18:01:01.860939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log
00:26:42.024 [2024-11-05 18:01:01.860954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.569 ms
00:26:42.024 [2024-11-05 18:01:01.860961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:42.024 [2024-11-05 18:01:01.864254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:42.024 [2024-11-05 18:01:01.864280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:26:42.024 [2024-11-05 18:01:01.864291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.265 ms
00:26:42.024 [2024-11-05 18:01:01.864297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:42.024 [2024-11-05 18:01:01.864329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:42.024 [2024-11-05 18:01:01.864336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:26:42.024 [2024-11-05 18:01:01.864348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:26:42.024 [2024-11-05 18:01:01.864354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:42.024 [2024-11-05 18:01:01.864412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:42.024 [2024-11-05 18:01:01.864419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:26:42.024 [2024-11-05 18:01:01.864434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms
00:26:42.024 [2024-11-05 18:01:01.864442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:42.024 [2024-11-05 18:01:01.865419] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2897.741 ms, result 0
00:26:42.024 {
00:26:42.024 "name": "ftl0",
00:26:42.024 "uuid": "b68b3d0c-2388-4054-9d20-b1208a1cd419"
00:26:42.024 }
00:26:42.024 18:01:01 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": ['
00:26:42.024 18:01:01 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:26:42.284 18:01:02 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}'
00:26:42.284 18:01:02 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
[2024-11-05 18:01:02.290893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:42.547 [2024-11-05 18:01:02.290950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:26:42.547 [2024-11-05 18:01:02.290966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:26:42.547 [2024-11-05 18:01:02.290975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:42.547 [2024-11-05 18:01:02.290996] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
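At this point restore.sh has saved the live bdev configuration and asked for the FTL device to be unloaded, which starts the 'FTL shutdown' management process traced through the rest of this block. Reconstructed from the restore.sh@61-65 trace above, the step amounts to roughly the following sketch (the redirect target is inferred from the --json argument handed to spdk_dd later; the script may assemble the file differently):

    # Wrap the bdev subsystem config in a top-level "subsystems" array ...
    {
        echo '{"subsystems": ['
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
        echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
    # ... then tear the FTL bdev down cleanly; this kicks off the shutdown trace.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0

The saved JSON lets a later spdk_dd process recreate the same bdev stack without querying a running SPDK app.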
00:26:42.548 [2024-11-05 18:01:02.291584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.548 [2024-11-05 18:01:02.291607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:42.548 [2024-11-05 18:01:02.291617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:26:42.548 [2024-11-05 18:01:02.291624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.548 [2024-11-05 18:01:02.291831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.548 [2024-11-05 18:01:02.291845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:42.548 [2024-11-05 18:01:02.291857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:26:42.548 [2024-11-05 18:01:02.291867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.548 [2024-11-05 18:01:02.294288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.548 [2024-11-05 18:01:02.294433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:42.548 [2024-11-05 18:01:02.294448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.405 ms 00:26:42.548 [2024-11-05 18:01:02.294455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.548 [2024-11-05 18:01:02.299125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.548 [2024-11-05 18:01:02.299148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:42.548 [2024-11-05 18:01:02.299159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.649 ms 00:26:42.548 [2024-11-05 18:01:02.299168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.548 [2024-11-05 18:01:02.301333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.548 [2024-11-05 18:01:02.301427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:42.548 [2024-11-05 18:01:02.301443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.087 ms 00:26:42.548 [2024-11-05 18:01:02.301449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.548 [2024-11-05 18:01:02.306400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.548 [2024-11-05 18:01:02.306427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:42.548 [2024-11-05 18:01:02.306445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.921 ms 00:26:42.548 [2024-11-05 18:01:02.306452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.548 [2024-11-05 18:01:02.306549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.548 [2024-11-05 18:01:02.306557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:42.548 [2024-11-05 18:01:02.306568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:26:42.548 [2024-11-05 18:01:02.306574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.548 [2024-11-05 18:01:02.308882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.548 [2024-11-05 18:01:02.308973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:42.548 [2024-11-05 18:01:02.308988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.290 ms 00:26:42.548 [2024-11-05 18:01:02.308994] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.548 [2024-11-05 18:01:02.310446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.548 [2024-11-05 18:01:02.310469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:42.548 [2024-11-05 18:01:02.310478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.422 ms 00:26:42.548 [2024-11-05 18:01:02.310483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.548 [2024-11-05 18:01:02.311687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.548 [2024-11-05 18:01:02.311716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:42.548 [2024-11-05 18:01:02.311725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.175 ms 00:26:42.548 [2024-11-05 18:01:02.311731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.548 [2024-11-05 18:01:02.312897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.548 [2024-11-05 18:01:02.312979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:42.548 [2024-11-05 18:01:02.313021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.114 ms 00:26:42.548 [2024-11-05 18:01:02.313038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.548 [2024-11-05 18:01:02.313087] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:42.548 [2024-11-05 18:01:02.313114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 
18:01:02.313526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:26:42.548 [2024-11-05 18:01:02.313700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:42.548 [2024-11-05 18:01:02.313768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.313774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.313783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.313788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.313796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.313803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.313810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.313816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.313822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.313828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.313835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.313841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.313848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.313854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.313866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.313872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.313879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.313884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.313893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.313900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.313908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.313914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.313922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.313927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.313934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.313940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.313948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.313954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.313961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.313967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.313975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.313981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.313988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.313994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.314003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.314009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.314016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.314023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.314031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.314036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.314043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.314049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.314056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.314062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.314087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.314093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.314101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.314108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.314115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.314121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.314130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:42.549 [2024-11-05 18:01:02.314144] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:42.549 [2024-11-05 18:01:02.314164] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b68b3d0c-2388-4054-9d20-b1208a1cd419 00:26:42.549 [2024-11-05 18:01:02.314171] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:42.549 [2024-11-05 18:01:02.314178] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:42.549 [2024-11-05 18:01:02.314184] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:42.549 [2024-11-05 18:01:02.314192] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:42.549 [2024-11-05 18:01:02.314197] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:42.549 [2024-11-05 18:01:02.314207] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:42.549 [2024-11-05 18:01:02.314214] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:42.549 [2024-11-05 18:01:02.314221] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:42.549 [2024-11-05 18:01:02.314226] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:42.549 [2024-11-05 18:01:02.314233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.549 [2024-11-05 18:01:02.314239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:42.549 [2024-11-05 18:01:02.314247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.150 ms 00:26:42.549 [2024-11-05 18:01:02.314253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.549 [2024-11-05 18:01:02.315996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.549 [2024-11-05 18:01:02.316097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
00:26:42.549 [2024-11-05 18:01:02.316112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.723 ms 00:26:42.549 [2024-11-05 18:01:02.316120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.549 [2024-11-05 18:01:02.316211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.549 [2024-11-05 18:01:02.316218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:42.549 [2024-11-05 18:01:02.316227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:26:42.549 [2024-11-05 18:01:02.316233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.549 [2024-11-05 18:01:02.322199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.549 [2024-11-05 18:01:02.322224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:42.549 [2024-11-05 18:01:02.322237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.549 [2024-11-05 18:01:02.322243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.549 [2024-11-05 18:01:02.322295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.549 [2024-11-05 18:01:02.322301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:42.549 [2024-11-05 18:01:02.322311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.549 [2024-11-05 18:01:02.322317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.549 [2024-11-05 18:01:02.322381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.549 [2024-11-05 18:01:02.322390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:42.549 [2024-11-05 18:01:02.322398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.549 [2024-11-05 18:01:02.322405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.549 [2024-11-05 18:01:02.322420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.549 [2024-11-05 18:01:02.322427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:42.549 [2024-11-05 18:01:02.322434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.549 [2024-11-05 18:01:02.322440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.549 [2024-11-05 18:01:02.333777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.549 [2024-11-05 18:01:02.333816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:42.549 [2024-11-05 18:01:02.333827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.549 [2024-11-05 18:01:02.333836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.549 [2024-11-05 18:01:02.342704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.549 [2024-11-05 18:01:02.342741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:42.549 [2024-11-05 18:01:02.342752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.549 [2024-11-05 18:01:02.342758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.549 [2024-11-05 18:01:02.342844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.550 [2024-11-05 18:01:02.342853] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:42.550 [2024-11-05 18:01:02.342862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.550 [2024-11-05 18:01:02.342867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.550 [2024-11-05 18:01:02.342901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.550 [2024-11-05 18:01:02.342909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:42.550 [2024-11-05 18:01:02.342916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.550 [2024-11-05 18:01:02.342922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.550 [2024-11-05 18:01:02.342981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.550 [2024-11-05 18:01:02.342989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:42.550 [2024-11-05 18:01:02.342997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.550 [2024-11-05 18:01:02.343003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.550 [2024-11-05 18:01:02.343029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.550 [2024-11-05 18:01:02.343038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:42.550 [2024-11-05 18:01:02.343045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.550 [2024-11-05 18:01:02.343052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.550 [2024-11-05 18:01:02.343180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.550 [2024-11-05 18:01:02.343189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:42.550 [2024-11-05 18:01:02.343198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.550 [2024-11-05 18:01:02.343203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.550 [2024-11-05 18:01:02.343247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:42.550 [2024-11-05 18:01:02.343256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:42.550 [2024-11-05 18:01:02.343264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:42.550 [2024-11-05 18:01:02.343270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.550 [2024-11-05 18:01:02.343386] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 52.460 ms, result 0 00:26:42.550 true 00:26:42.550 18:01:02 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 92482 00:26:42.550 18:01:02 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # '[' -z 92482 ']' 00:26:42.550 18:01:02 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # kill -0 92482 00:26:42.550 18:01:02 ftl.ftl_restore_fast -- common/autotest_common.sh@957 -- # uname 00:26:42.550 18:01:02 ftl.ftl_restore_fast -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:42.550 18:01:02 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 92482 00:26:42.550 18:01:02 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:42.550 18:01:02 ftl.ftl_restore_fast -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 
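The trace that follows kills the app that served the first phase (pid 92482) and then prepares the restore payload; the commands appear verbatim in the restore.sh@69-73 trace below. In sketch form, with the sizes as logged (256K records of 4 KiB each is exactly the 1073741824 bytes, i.e. 1 GiB, that dd reports):

    # Generate a 1 GiB random test file (256K x 4 KiB = 1 GiB) ...
    dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
    # ... checksum it so the restored contents can be verified later ...
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
    # ... and write it to the FTL bdev via spdk_dd, recreating the bdev stack
    # from the JSON config saved before shutdown.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
        --ob=ftl0 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json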
00:26:42.550 killing process with pid 92482 18:01:02 ftl.ftl_restore_fast -- common/autotest_common.sh@970 -- # echo 'killing process with pid 92482'
00:26:42.550 18:01:02 ftl.ftl_restore_fast -- common/autotest_common.sh@971 -- # kill 92482
00:26:47.826 18:01:02 ftl.ftl_restore_fast -- common/autotest_common.sh@976 -- # wait 92482
00:26:47.826 18:01:07 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
00:26:51.109 262144+0 records in
00:26:51.109 262144+0 records out
00:26:51.109 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.56251 s, 301 MB/s
00:26:51.109 18:01:10 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:26:53.019 18:01:12 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:26:53.019 [2024-11-05 18:01:12.859160] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization...
00:26:53.019 [2024-11-05 18:01:12.859293] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92685 ]
00:26:53.019 [2024-11-05 18:01:12.988938] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation.
00:26:53.019 [2024-11-05 18:01:13.011588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:53.277 [2024-11-05 18:01:13.036294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:53.277 [2024-11-05 18:01:13.138433] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:26:53.277 [2024-11-05 18:01:13.138690] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:26:53.537 [2024-11-05 18:01:13.293166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:53.537 [2024-11-05 18:01:13.293344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:26:53.537 [2024-11-05 18:01:13.293365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:26:53.537 [2024-11-05 18:01:13.293380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:53.537 [2024-11-05 18:01:13.293432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:53.537 [2024-11-05 18:01:13.293442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:26:53.537 [2024-11-05 18:01:13.293451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms
00:26:53.537 [2024-11-05 18:01:13.293459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:53.537 [2024-11-05 18:01:13.293484] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:26:53.537 [2024-11-05 18:01:13.293701] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:26:53.537 [2024-11-05 18:01:13.293715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:53.537 [2024-11-05 18:01:13.293723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:26:53.537 [2024-11-05 18:01:13.293737]
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:26:53.537 [2024-11-05 18:01:13.293744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.537 [2024-11-05 18:01:13.295092] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:53.537 [2024-11-05 18:01:13.297748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.537 [2024-11-05 18:01:13.297782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:53.537 [2024-11-05 18:01:13.297798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.657 ms 00:26:53.537 [2024-11-05 18:01:13.297812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.537 [2024-11-05 18:01:13.297874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.537 [2024-11-05 18:01:13.297884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:53.537 [2024-11-05 18:01:13.297897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:26:53.537 [2024-11-05 18:01:13.297905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.537 [2024-11-05 18:01:13.304238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.537 [2024-11-05 18:01:13.304382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:53.537 [2024-11-05 18:01:13.304403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.278 ms 00:26:53.537 [2024-11-05 18:01:13.304411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.537 [2024-11-05 18:01:13.304504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.537 [2024-11-05 18:01:13.304515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:53.537 [2024-11-05 18:01:13.304524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:26:53.537 [2024-11-05 18:01:13.304535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.537 [2024-11-05 18:01:13.304583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.537 [2024-11-05 18:01:13.304593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:53.537 [2024-11-05 18:01:13.304602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:53.537 [2024-11-05 18:01:13.304618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.537 [2024-11-05 18:01:13.304642] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:53.537 [2024-11-05 18:01:13.306260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.537 [2024-11-05 18:01:13.306286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:53.537 [2024-11-05 18:01:13.306296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.623 ms 00:26:53.537 [2024-11-05 18:01:13.306304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.537 [2024-11-05 18:01:13.306339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.537 [2024-11-05 18:01:13.306347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:53.537 [2024-11-05 18:01:13.306355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:53.537 [2024-11-05 18:01:13.306366] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.537 [2024-11-05 18:01:13.306394] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:53.537 [2024-11-05 18:01:13.306414] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:53.538 [2024-11-05 18:01:13.306453] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:53.538 [2024-11-05 18:01:13.306473] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:53.538 [2024-11-05 18:01:13.306581] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:53.538 [2024-11-05 18:01:13.306592] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:53.538 [2024-11-05 18:01:13.306606] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:53.538 [2024-11-05 18:01:13.306616] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:53.538 [2024-11-05 18:01:13.306625] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:53.538 [2024-11-05 18:01:13.306633] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:53.538 [2024-11-05 18:01:13.306640] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:53.538 [2024-11-05 18:01:13.306646] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:53.538 [2024-11-05 18:01:13.306653] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:53.538 [2024-11-05 18:01:13.306661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.538 [2024-11-05 18:01:13.306669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:53.538 [2024-11-05 18:01:13.306678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:26:53.538 [2024-11-05 18:01:13.306687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.538 [2024-11-05 18:01:13.306771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.538 [2024-11-05 18:01:13.306780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:53.538 [2024-11-05 18:01:13.306788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:53.538 [2024-11-05 18:01:13.306806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.538 [2024-11-05 18:01:13.306910] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:53.538 [2024-11-05 18:01:13.306926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:53.538 [2024-11-05 18:01:13.306935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:53.538 [2024-11-05 18:01:13.306943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:53.538 [2024-11-05 18:01:13.306953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:53.538 [2024-11-05 18:01:13.306961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:53.538 [2024-11-05 18:01:13.306969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 
MiB 00:26:53.538 [2024-11-05 18:01:13.306978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:53.538 [2024-11-05 18:01:13.306993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:53.538 [2024-11-05 18:01:13.307001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:53.538 [2024-11-05 18:01:13.307010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:53.538 [2024-11-05 18:01:13.307018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:53.538 [2024-11-05 18:01:13.307026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:53.538 [2024-11-05 18:01:13.307035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:53.538 [2024-11-05 18:01:13.307043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:53.538 [2024-11-05 18:01:13.307050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:53.538 [2024-11-05 18:01:13.307058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:53.538 [2024-11-05 18:01:13.307101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:53.538 [2024-11-05 18:01:13.307109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:53.538 [2024-11-05 18:01:13.307117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:53.538 [2024-11-05 18:01:13.307126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:53.538 [2024-11-05 18:01:13.307134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:53.538 [2024-11-05 18:01:13.307142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:53.538 [2024-11-05 18:01:13.307150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:53.538 [2024-11-05 18:01:13.307158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:53.538 [2024-11-05 18:01:13.307167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:53.538 [2024-11-05 18:01:13.307184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:53.538 [2024-11-05 18:01:13.307191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:53.538 [2024-11-05 18:01:13.307200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:53.538 [2024-11-05 18:01:13.307208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:53.538 [2024-11-05 18:01:13.307216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:53.538 [2024-11-05 18:01:13.307223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:53.538 [2024-11-05 18:01:13.307230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:53.538 [2024-11-05 18:01:13.307238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:53.538 [2024-11-05 18:01:13.307246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:53.538 [2024-11-05 18:01:13.307254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:53.538 [2024-11-05 18:01:13.307261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:53.538 [2024-11-05 18:01:13.307268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:53.538 [2024-11-05 18:01:13.307276] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:53.538 [2024-11-05 18:01:13.307284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:53.538 [2024-11-05 18:01:13.307292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:53.538 [2024-11-05 18:01:13.307300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:53.538 [2024-11-05 18:01:13.307309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:53.538 [2024-11-05 18:01:13.307317] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:53.538 [2024-11-05 18:01:13.307327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:53.538 [2024-11-05 18:01:13.307335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:53.538 [2024-11-05 18:01:13.307343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:53.538 [2024-11-05 18:01:13.307351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:53.538 [2024-11-05 18:01:13.307357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:53.538 [2024-11-05 18:01:13.307364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:53.538 [2024-11-05 18:01:13.307370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:53.538 [2024-11-05 18:01:13.307377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:53.538 [2024-11-05 18:01:13.307384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:53.538 [2024-11-05 18:01:13.307392] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:53.538 [2024-11-05 18:01:13.307401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:53.538 [2024-11-05 18:01:13.307409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:53.538 [2024-11-05 18:01:13.307417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:53.538 [2024-11-05 18:01:13.307424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:53.538 [2024-11-05 18:01:13.307435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:53.538 [2024-11-05 18:01:13.307443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:53.538 [2024-11-05 18:01:13.307451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:53.538 [2024-11-05 18:01:13.307458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:53.538 [2024-11-05 18:01:13.307465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:53.538 [2024-11-05 18:01:13.307472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:53.538 [2024-11-05 18:01:13.307480] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:53.538 [2024-11-05 18:01:13.307487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:53.538 [2024-11-05 18:01:13.307495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:53.538 [2024-11-05 18:01:13.307502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:53.538 [2024-11-05 18:01:13.307509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:53.538 [2024-11-05 18:01:13.307516] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:53.538 [2024-11-05 18:01:13.307524] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:53.538 [2024-11-05 18:01:13.307533] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:53.538 [2024-11-05 18:01:13.307540] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:53.538 [2024-11-05 18:01:13.307547] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:53.538 [2024-11-05 18:01:13.307556] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:53.538 [2024-11-05 18:01:13.307564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.538 [2024-11-05 18:01:13.307572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:53.538 [2024-11-05 18:01:13.307580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:26:53.538 [2024-11-05 18:01:13.307589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.539 [2024-11-05 18:01:13.319040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.539 [2024-11-05 18:01:13.319088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:53.539 [2024-11-05 18:01:13.319100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.405 ms 00:26:53.539 [2024-11-05 18:01:13.319110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.539 [2024-11-05 18:01:13.319196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.539 [2024-11-05 18:01:13.319205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:53.539 [2024-11-05 18:01:13.319214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:26:53.539 [2024-11-05 18:01:13.319222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.539 [2024-11-05 18:01:13.336055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.539 [2024-11-05 18:01:13.336111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:53.539 [2024-11-05 18:01:13.336126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 16.783 ms 00:26:53.539 [2024-11-05 18:01:13.336137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.539 [2024-11-05 18:01:13.336184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.539 [2024-11-05 18:01:13.336196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:53.539 [2024-11-05 18:01:13.336208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:53.539 [2024-11-05 18:01:13.336222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.539 [2024-11-05 18:01:13.336686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.539 [2024-11-05 18:01:13.336706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:53.539 [2024-11-05 18:01:13.336719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:26:53.539 [2024-11-05 18:01:13.336737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.539 [2024-11-05 18:01:13.336898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.539 [2024-11-05 18:01:13.336909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:53.539 [2024-11-05 18:01:13.336920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:26:53.539 [2024-11-05 18:01:13.336929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.539 [2024-11-05 18:01:13.343981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.539 [2024-11-05 18:01:13.344155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:53.539 [2024-11-05 18:01:13.344175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.029 ms 00:26:53.539 [2024-11-05 18:01:13.344186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.539 [2024-11-05 18:01:13.347173] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:53.539 [2024-11-05 18:01:13.347315] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:53.539 [2024-11-05 18:01:13.347339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.539 [2024-11-05 18:01:13.347351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:53.539 [2024-11-05 18:01:13.347362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.046 ms 00:26:53.539 [2024-11-05 18:01:13.347370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.539 [2024-11-05 18:01:13.362148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.539 [2024-11-05 18:01:13.362286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:53.539 [2024-11-05 18:01:13.362302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.739 ms 00:26:53.539 [2024-11-05 18:01:13.362316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.539 [2024-11-05 18:01:13.364010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.539 [2024-11-05 18:01:13.364042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:53.539 [2024-11-05 18:01:13.364052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.658 ms 00:26:53.539 [2024-11-05 18:01:13.364059] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.539 [2024-11-05 18:01:13.365718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.539 [2024-11-05 18:01:13.365857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:53.539 [2024-11-05 18:01:13.365874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.611 ms 00:26:53.539 [2024-11-05 18:01:13.365892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.539 [2024-11-05 18:01:13.366247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.539 [2024-11-05 18:01:13.366269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:53.539 [2024-11-05 18:01:13.366278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:26:53.539 [2024-11-05 18:01:13.366286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.539 [2024-11-05 18:01:13.385370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.539 [2024-11-05 18:01:13.385424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:53.539 [2024-11-05 18:01:13.385436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.068 ms 00:26:53.539 [2024-11-05 18:01:13.385450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.539 [2024-11-05 18:01:13.393142] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:53.539 [2024-11-05 18:01:13.396186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.539 [2024-11-05 18:01:13.396329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:53.539 [2024-11-05 18:01:13.396353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.696 ms 00:26:53.539 [2024-11-05 18:01:13.396361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.539 [2024-11-05 18:01:13.396443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.539 [2024-11-05 18:01:13.396455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:53.539 [2024-11-05 18:01:13.396464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:53.539 [2024-11-05 18:01:13.396471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.539 [2024-11-05 18:01:13.396542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.539 [2024-11-05 18:01:13.396553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:53.539 [2024-11-05 18:01:13.396561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:26:53.539 [2024-11-05 18:01:13.396572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.539 [2024-11-05 18:01:13.396598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.539 [2024-11-05 18:01:13.396607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:53.539 [2024-11-05 18:01:13.396615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:53.539 [2024-11-05 18:01:13.396627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.539 [2024-11-05 18:01:13.396664] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:53.539 [2024-11-05 18:01:13.396675] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.539 [2024-11-05 18:01:13.396683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:53.539 [2024-11-05 18:01:13.396691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:26:53.539 [2024-11-05 18:01:13.396698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.539 [2024-11-05 18:01:13.400478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.539 [2024-11-05 18:01:13.400510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:53.539 [2024-11-05 18:01:13.400527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.757 ms 00:26:53.539 [2024-11-05 18:01:13.400535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.539 [2024-11-05 18:01:13.400608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.539 [2024-11-05 18:01:13.400618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:53.539 [2024-11-05 18:01:13.400631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:26:53.539 [2024-11-05 18:01:13.400641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.539 [2024-11-05 18:01:13.401709] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 108.130 ms, result 0 00:26:54.478  [2024-11-05T18:01:15.851Z] Copying: 31/1024 [MB] (31 MBps) [2024-11-05T18:01:16.420Z] Copying: 64/1024 [MB] (32 MBps) [2024-11-05T18:01:17.805Z] Copying: 99/1024 [MB] (35 MBps) [2024-11-05T18:01:18.745Z] Copying: 131/1024 [MB] (31 MBps) [2024-11-05T18:01:19.701Z] Copying: 158/1024 [MB] (26 MBps) [2024-11-05T18:01:20.642Z] Copying: 182/1024 [MB] (23 MBps) [2024-11-05T18:01:21.584Z] Copying: 208/1024 [MB] (26 MBps) [2024-11-05T18:01:22.525Z] Copying: 231/1024 [MB] (22 MBps) [2024-11-05T18:01:23.464Z] Copying: 256/1024 [MB] (24 MBps) [2024-11-05T18:01:24.844Z] Copying: 276/1024 [MB] (20 MBps) [2024-11-05T18:01:25.414Z] Copying: 289/1024 [MB] (13 MBps) [2024-11-05T18:01:26.797Z] Copying: 310/1024 [MB] (21 MBps) [2024-11-05T18:01:27.739Z] Copying: 329/1024 [MB] (18 MBps) [2024-11-05T18:01:28.684Z] Copying: 347/1024 [MB] (18 MBps) [2024-11-05T18:01:29.628Z] Copying: 362/1024 [MB] (14 MBps) [2024-11-05T18:01:30.570Z] Copying: 378/1024 [MB] (16 MBps) [2024-11-05T18:01:31.515Z] Copying: 391/1024 [MB] (12 MBps) [2024-11-05T18:01:32.457Z] Copying: 404/1024 [MB] (12 MBps) [2024-11-05T18:01:33.840Z] Copying: 415/1024 [MB] (11 MBps) [2024-11-05T18:01:34.413Z] Copying: 426/1024 [MB] (11 MBps) [2024-11-05T18:01:35.799Z] Copying: 437/1024 [MB] (10 MBps) [2024-11-05T18:01:36.742Z] Copying: 448/1024 [MB] (10 MBps) [2024-11-05T18:01:37.687Z] Copying: 460/1024 [MB] (11 MBps) [2024-11-05T18:01:38.632Z] Copying: 471/1024 [MB] (11 MBps) [2024-11-05T18:01:39.574Z] Copying: 484/1024 [MB] (12 MBps) [2024-11-05T18:01:40.515Z] Copying: 495/1024 [MB] (10 MBps) [2024-11-05T18:01:41.459Z] Copying: 506/1024 [MB] (10 MBps) [2024-11-05T18:01:42.845Z] Copying: 516/1024 [MB] (10 MBps) [2024-11-05T18:01:43.437Z] Copying: 530/1024 [MB] (13 MBps) [2024-11-05T18:01:44.826Z] Copying: 543/1024 [MB] (13 MBps) [2024-11-05T18:01:45.770Z] Copying: 562/1024 [MB] (18 MBps) [2024-11-05T18:01:46.716Z] Copying: 574/1024 [MB] (12 MBps) [2024-11-05T18:01:47.654Z] Copying: 586/1024 [MB] (11 MBps) [2024-11-05T18:01:48.596Z] Copying: 617/1024 [MB] (30 MBps) [2024-11-05T18:01:49.540Z] 
Copying: 641/1024 [MB] (24 MBps) [2024-11-05T18:01:50.482Z] Copying: 661/1024 [MB] (20 MBps) [2024-11-05T18:01:51.424Z] Copying: 680/1024 [MB] (18 MBps) [2024-11-05T18:01:52.808Z] Copying: 696/1024 [MB] (15 MBps) [2024-11-05T18:01:53.751Z] Copying: 713/1024 [MB] (17 MBps) [2024-11-05T18:01:54.693Z] Copying: 733/1024 [MB] (19 MBps) [2024-11-05T18:01:55.638Z] Copying: 755/1024 [MB] (22 MBps) [2024-11-05T18:01:56.581Z] Copying: 777/1024 [MB] (21 MBps) [2024-11-05T18:01:57.523Z] Copying: 796/1024 [MB] (19 MBps) [2024-11-05T18:01:58.466Z] Copying: 818/1024 [MB] (21 MBps) [2024-11-05T18:01:59.850Z] Copying: 842/1024 [MB] (23 MBps) [2024-11-05T18:02:00.434Z] Copying: 863/1024 [MB] (21 MBps) [2024-11-05T18:02:01.819Z] Copying: 888/1024 [MB] (25 MBps) [2024-11-05T18:02:02.761Z] Copying: 913/1024 [MB] (24 MBps) [2024-11-05T18:02:03.703Z] Copying: 937/1024 [MB] (24 MBps) [2024-11-05T18:02:04.647Z] Copying: 954/1024 [MB] (17 MBps) [2024-11-05T18:02:05.588Z] Copying: 974/1024 [MB] (19 MBps) [2024-11-05T18:02:06.544Z] Copying: 993/1024 [MB] (18 MBps) [2024-11-05T18:02:07.489Z] Copying: 1007/1024 [MB] (14 MBps) [2024-11-05T18:02:07.489Z] Copying: 1023/1024 [MB] (16 MBps) [2024-11-05T18:02:07.489Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-05 18:02:07.426436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.498 [2024-11-05 18:02:07.426483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:47.498 [2024-11-05 18:02:07.426503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:47.498 [2024-11-05 18:02:07.426512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.498 [2024-11-05 18:02:07.426542] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:47.498 [2024-11-05 18:02:07.427127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.498 [2024-11-05 18:02:07.427146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:47.498 [2024-11-05 18:02:07.427156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:27:47.498 [2024-11-05 18:02:07.427164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.498 [2024-11-05 18:02:07.428946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.498 [2024-11-05 18:02:07.429091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:47.498 [2024-11-05 18:02:07.429107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.764 ms 00:27:47.498 [2024-11-05 18:02:07.429116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.498 [2024-11-05 18:02:07.429149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.498 [2024-11-05 18:02:07.429158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:27:47.498 [2024-11-05 18:02:07.429167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:47.498 [2024-11-05 18:02:07.429174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.498 [2024-11-05 18:02:07.429221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.498 [2024-11-05 18:02:07.429230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:27:47.498 [2024-11-05 18:02:07.429237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:27:47.498 
[2024-11-05 18:02:07.429250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.498 [2024-11-05 18:02:07.429262] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:47.498 [2024-11-05 18:02:07.429275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429464] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:47.498 [2024-11-05 18:02:07.429587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 
18:02:07.429646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:27:47.499 [2024-11-05 18:02:07.429835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.429994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.430001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.430008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.430016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.430023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:47.499 [2024-11-05 18:02:07.430038] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:47.499 [2024-11-05 18:02:07.430045] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b68b3d0c-2388-4054-9d20-b1208a1cd419 00:27:47.499 [2024-11-05 18:02:07.430057] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:47.499 [2024-11-05 18:02:07.430075] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:27:47.499 [2024-11-05 18:02:07.430083] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:47.499 [2024-11-05 18:02:07.430090] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:47.499 [2024-11-05 18:02:07.430097] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:47.499 [2024-11-05 18:02:07.430105] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:47.499 [2024-11-05 18:02:07.430117] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:47.499 [2024-11-05 18:02:07.430124] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:47.499 [2024-11-05 18:02:07.430130] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:47.499 [2024-11-05 18:02:07.430137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.499 [2024-11-05 18:02:07.430145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:47.499 [2024-11-05 18:02:07.430154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.876 ms 00:27:47.499 [2024-11-05 18:02:07.430163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.499 [2024-11-05 18:02:07.431904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.499 [2024-11-05 18:02:07.431927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:47.499 [2024-11-05 18:02:07.431938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.724 ms 00:27:47.499 [2024-11-05 18:02:07.431952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.499 [2024-11-05 18:02:07.432049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.499 [2024-11-05 18:02:07.432061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:47.499 [2024-11-05 18:02:07.432082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:27:47.499 [2024-11-05 18:02:07.432090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.499 [2024-11-05 18:02:07.438011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:47.499 [2024-11-05 18:02:07.438036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:47.499 [2024-11-05 18:02:07.438045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:47.499 [2024-11-05 18:02:07.438053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.499 [2024-11-05 18:02:07.438120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:47.499 [2024-11-05 18:02:07.438129] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:47.499 [2024-11-05 18:02:07.438141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:47.499 [2024-11-05 18:02:07.438149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.499 [2024-11-05 18:02:07.438192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:47.499 [2024-11-05 18:02:07.438201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:47.499 [2024-11-05 18:02:07.438208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:47.499 [2024-11-05 18:02:07.438216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.499 [2024-11-05 18:02:07.438231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:47.499 [2024-11-05 18:02:07.438239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:47.499 [2024-11-05 18:02:07.438246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:47.500 [2024-11-05 18:02:07.438256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.500 [2024-11-05 18:02:07.449365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:47.500 [2024-11-05 18:02:07.449401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:47.500 [2024-11-05 18:02:07.449412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:47.500 [2024-11-05 18:02:07.449420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.500 [2024-11-05 18:02:07.458166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:47.500 [2024-11-05 18:02:07.458201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:47.500 [2024-11-05 18:02:07.458217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:47.500 [2024-11-05 18:02:07.458225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.500 [2024-11-05 18:02:07.458273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:47.500 [2024-11-05 18:02:07.458283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:47.500 [2024-11-05 18:02:07.458290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:47.500 [2024-11-05 18:02:07.458298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.500 [2024-11-05 18:02:07.458323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:47.500 [2024-11-05 18:02:07.458337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:47.500 [2024-11-05 18:02:07.458345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:47.500 [2024-11-05 18:02:07.458352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.500 [2024-11-05 18:02:07.458405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:47.500 [2024-11-05 18:02:07.458415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:47.500 [2024-11-05 18:02:07.458423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:47.500 [2024-11-05 18:02:07.458431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.500 [2024-11-05 18:02:07.458461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:27:47.500 [2024-11-05 18:02:07.458471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:47.500 [2024-11-05 18:02:07.458478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:47.500 [2024-11-05 18:02:07.458486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.500 [2024-11-05 18:02:07.458526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:47.500 [2024-11-05 18:02:07.458536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:47.500 [2024-11-05 18:02:07.458544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:47.500 [2024-11-05 18:02:07.458555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.500 [2024-11-05 18:02:07.458596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:47.500 [2024-11-05 18:02:07.458606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:47.500 [2024-11-05 18:02:07.458614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:47.500 [2024-11-05 18:02:07.458621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.500 [2024-11-05 18:02:07.458757] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 32.286 ms, result 0 00:27:48.073 00:27:48.073 00:27:48.073 18:02:08 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:27:48.334 [2024-11-05 18:02:08.069110] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:27:48.334 [2024-11-05 18:02:08.069230] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93252 ] 00:27:48.334 [2024-11-05 18:02:08.198903] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:27:48.334 [2024-11-05 18:02:08.229418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.334 [2024-11-05 18:02:08.248698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.596 [2024-11-05 18:02:08.336771] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:48.596 [2024-11-05 18:02:08.336833] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:48.596 [2024-11-05 18:02:08.494393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.596 [2024-11-05 18:02:08.494594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:48.597 [2024-11-05 18:02:08.494614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:48.597 [2024-11-05 18:02:08.494629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.597 [2024-11-05 18:02:08.494688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.597 [2024-11-05 18:02:08.494699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:48.597 [2024-11-05 18:02:08.494711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:27:48.597 [2024-11-05 18:02:08.494718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.597 [2024-11-05 18:02:08.494744] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:48.597 [2024-11-05 18:02:08.494993] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:48.597 [2024-11-05 18:02:08.495008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.597 [2024-11-05 18:02:08.495015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:48.597 [2024-11-05 18:02:08.495026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:27:48.597 [2024-11-05 18:02:08.495033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.597 [2024-11-05 18:02:08.495292] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:27:48.597 [2024-11-05 18:02:08.495314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.597 [2024-11-05 18:02:08.495326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:48.597 [2024-11-05 18:02:08.495336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:27:48.597 [2024-11-05 18:02:08.495350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.597 [2024-11-05 18:02:08.495457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.597 [2024-11-05 18:02:08.495476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:48.597 [2024-11-05 18:02:08.495487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:27:48.597 [2024-11-05 18:02:08.495495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.597 [2024-11-05 18:02:08.495730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.597 [2024-11-05 18:02:08.495746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:48.597 [2024-11-05 18:02:08.495756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:27:48.597 [2024-11-05 18:02:08.495764] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.597 [2024-11-05 18:02:08.495843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.597 [2024-11-05 18:02:08.495852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:48.597 [2024-11-05 18:02:08.495860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:27:48.597 [2024-11-05 18:02:08.495867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.597 [2024-11-05 18:02:08.495888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.597 [2024-11-05 18:02:08.495896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:48.597 [2024-11-05 18:02:08.495907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:48.597 [2024-11-05 18:02:08.495920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.597 [2024-11-05 18:02:08.495937] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:48.597 [2024-11-05 18:02:08.497433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.597 [2024-11-05 18:02:08.497455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:48.597 [2024-11-05 18:02:08.497465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.500 ms 00:27:48.597 [2024-11-05 18:02:08.497474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.597 [2024-11-05 18:02:08.497507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.597 [2024-11-05 18:02:08.497516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:48.597 [2024-11-05 18:02:08.497525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:48.597 [2024-11-05 18:02:08.497533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.597 [2024-11-05 18:02:08.497551] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:48.597 [2024-11-05 18:02:08.497571] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:48.597 [2024-11-05 18:02:08.497612] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:48.597 [2024-11-05 18:02:08.497629] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:48.597 [2024-11-05 18:02:08.497735] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:48.597 [2024-11-05 18:02:08.497785] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:48.597 [2024-11-05 18:02:08.497798] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:48.597 [2024-11-05 18:02:08.497811] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:48.597 [2024-11-05 18:02:08.497823] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:48.597 [2024-11-05 18:02:08.497832] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:48.597 [2024-11-05 18:02:08.497841] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:27:48.597 [2024-11-05 18:02:08.497849] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:48.597 [2024-11-05 18:02:08.497857] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:48.597 [2024-11-05 18:02:08.497866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.597 [2024-11-05 18:02:08.497874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:48.597 [2024-11-05 18:02:08.497882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:27:48.597 [2024-11-05 18:02:08.497894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.597 [2024-11-05 18:02:08.497977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.597 [2024-11-05 18:02:08.497988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:48.597 [2024-11-05 18:02:08.497999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:27:48.597 [2024-11-05 18:02:08.498007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.597 [2024-11-05 18:02:08.498134] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:48.597 [2024-11-05 18:02:08.498150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:48.597 [2024-11-05 18:02:08.498160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:48.597 [2024-11-05 18:02:08.498169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:48.597 [2024-11-05 18:02:08.498178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:48.597 [2024-11-05 18:02:08.498186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:48.597 [2024-11-05 18:02:08.498195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:48.597 [2024-11-05 18:02:08.498203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:48.597 [2024-11-05 18:02:08.498211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:48.597 [2024-11-05 18:02:08.498218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:48.597 [2024-11-05 18:02:08.498231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:48.597 [2024-11-05 18:02:08.498239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:48.597 [2024-11-05 18:02:08.498247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:48.597 [2024-11-05 18:02:08.498255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:48.597 [2024-11-05 18:02:08.498262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:48.597 [2024-11-05 18:02:08.498270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:48.597 [2024-11-05 18:02:08.498277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:48.597 [2024-11-05 18:02:08.498288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:48.597 [2024-11-05 18:02:08.498296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:48.597 [2024-11-05 18:02:08.498304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:48.597 [2024-11-05 18:02:08.498312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:48.597 [2024-11-05 18:02:08.498320] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:48.597 [2024-11-05 18:02:08.498327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:48.597 [2024-11-05 18:02:08.498335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:48.597 [2024-11-05 18:02:08.498342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:48.597 [2024-11-05 18:02:08.498350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:48.597 [2024-11-05 18:02:08.498357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:48.597 [2024-11-05 18:02:08.498365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:48.597 [2024-11-05 18:02:08.498372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:48.597 [2024-11-05 18:02:08.498379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:48.597 [2024-11-05 18:02:08.498386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:48.597 [2024-11-05 18:02:08.498394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:48.597 [2024-11-05 18:02:08.498402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:48.597 [2024-11-05 18:02:08.498413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:48.597 [2024-11-05 18:02:08.498422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:48.597 [2024-11-05 18:02:08.498429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:48.597 [2024-11-05 18:02:08.498437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:48.597 [2024-11-05 18:02:08.498450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:48.597 [2024-11-05 18:02:08.498458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:48.597 [2024-11-05 18:02:08.498465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:48.597 [2024-11-05 18:02:08.498473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:48.597 [2024-11-05 18:02:08.498481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:48.598 [2024-11-05 18:02:08.498488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:48.598 [2024-11-05 18:02:08.498495] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:48.598 [2024-11-05 18:02:08.498507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:48.598 [2024-11-05 18:02:08.498515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:48.598 [2024-11-05 18:02:08.498525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:48.598 [2024-11-05 18:02:08.498537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:48.598 [2024-11-05 18:02:08.498545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:48.598 [2024-11-05 18:02:08.498555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:48.598 [2024-11-05 18:02:08.498563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:48.598 [2024-11-05 18:02:08.498571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:48.598 [2024-11-05 18:02:08.498578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:27:48.598 [2024-11-05 18:02:08.498588] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:48.598 [2024-11-05 18:02:08.498597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:48.598 [2024-11-05 18:02:08.498607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:48.598 [2024-11-05 18:02:08.498615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:48.598 [2024-11-05 18:02:08.498623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:48.598 [2024-11-05 18:02:08.498631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:48.598 [2024-11-05 18:02:08.498639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:48.598 [2024-11-05 18:02:08.498647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:48.598 [2024-11-05 18:02:08.498655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:48.598 [2024-11-05 18:02:08.498663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:48.598 [2024-11-05 18:02:08.498671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:48.598 [2024-11-05 18:02:08.498679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:48.598 [2024-11-05 18:02:08.498689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:48.598 [2024-11-05 18:02:08.498697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:48.598 [2024-11-05 18:02:08.498706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:48.598 [2024-11-05 18:02:08.498714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:48.598 [2024-11-05 18:02:08.498722] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:48.598 [2024-11-05 18:02:08.498731] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:48.598 [2024-11-05 18:02:08.498740] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:48.598 [2024-11-05 18:02:08.498748] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:48.598 [2024-11-05 18:02:08.498757] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:48.598 [2024-11-05 18:02:08.498765] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:48.598 [2024-11-05 18:02:08.498773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.598 [2024-11-05 18:02:08.498802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:48.598 [2024-11-05 18:02:08.498811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.726 ms 00:27:48.598 [2024-11-05 18:02:08.498819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.598 [2024-11-05 18:02:08.505162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.598 [2024-11-05 18:02:08.505185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:48.598 [2024-11-05 18:02:08.505193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.288 ms 00:27:48.598 [2024-11-05 18:02:08.505201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.598 [2024-11-05 18:02:08.505276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.598 [2024-11-05 18:02:08.505283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:48.598 [2024-11-05 18:02:08.505290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:27:48.598 [2024-11-05 18:02:08.505299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.598 [2024-11-05 18:02:08.527912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.598 [2024-11-05 18:02:08.528269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:48.598 [2024-11-05 18:02:08.528331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.568 ms 00:27:48.598 [2024-11-05 18:02:08.528353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.598 [2024-11-05 18:02:08.528450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.598 [2024-11-05 18:02:08.528476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:48.598 [2024-11-05 18:02:08.528498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:48.598 [2024-11-05 18:02:08.528518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.598 [2024-11-05 18:02:08.528741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.598 [2024-11-05 18:02:08.528798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:48.598 [2024-11-05 18:02:08.528824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:27:48.598 [2024-11-05 18:02:08.528847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.598 [2024-11-05 18:02:08.529176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.598 [2024-11-05 18:02:08.529215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:48.598 [2024-11-05 18:02:08.529237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:27:48.598 [2024-11-05 18:02:08.529266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.598 [2024-11-05 18:02:08.535355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.598 [2024-11-05 
18:02:08.535465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:48.598 [2024-11-05 18:02:08.535485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.045 ms 00:27:48.598 [2024-11-05 18:02:08.535493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.598 [2024-11-05 18:02:08.535593] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:48.598 [2024-11-05 18:02:08.535605] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:48.598 [2024-11-05 18:02:08.535618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.598 [2024-11-05 18:02:08.535626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:48.598 [2024-11-05 18:02:08.535634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:27:48.598 [2024-11-05 18:02:08.535647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.598 [2024-11-05 18:02:08.547922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.598 [2024-11-05 18:02:08.547948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:48.598 [2024-11-05 18:02:08.547963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.260 ms 00:27:48.598 [2024-11-05 18:02:08.547972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.598 [2024-11-05 18:02:08.548098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.598 [2024-11-05 18:02:08.548107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:48.598 [2024-11-05 18:02:08.548116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:27:48.598 [2024-11-05 18:02:08.548126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.598 [2024-11-05 18:02:08.548168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.598 [2024-11-05 18:02:08.548179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:48.598 [2024-11-05 18:02:08.548187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:27:48.598 [2024-11-05 18:02:08.548194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.598 [2024-11-05 18:02:08.548485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.598 [2024-11-05 18:02:08.548494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:48.598 [2024-11-05 18:02:08.548506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:27:48.598 [2024-11-05 18:02:08.548513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.598 [2024-11-05 18:02:08.548530] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:27:48.598 [2024-11-05 18:02:08.548539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.598 [2024-11-05 18:02:08.548547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:48.598 [2024-11-05 18:02:08.548559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:48.598 [2024-11-05 18:02:08.548566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.598 [2024-11-05 18:02:08.556541] 
ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:48.598 [2024-11-05 18:02:08.556674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.598 [2024-11-05 18:02:08.556683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:48.598 [2024-11-05 18:02:08.556692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.091 ms 00:27:48.598 [2024-11-05 18:02:08.556703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.598 [2024-11-05 18:02:08.559217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.598 [2024-11-05 18:02:08.559243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:48.598 [2024-11-05 18:02:08.559253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.494 ms 00:27:48.598 [2024-11-05 18:02:08.559265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.598 [2024-11-05 18:02:08.559330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.598 [2024-11-05 18:02:08.559340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:48.599 [2024-11-05 18:02:08.559352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:27:48.599 [2024-11-05 18:02:08.559359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.599 [2024-11-05 18:02:08.559394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.599 [2024-11-05 18:02:08.559403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:48.599 [2024-11-05 18:02:08.559411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:48.599 [2024-11-05 18:02:08.559418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.599 [2024-11-05 18:02:08.559447] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:48.599 [2024-11-05 18:02:08.559456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.599 [2024-11-05 18:02:08.559463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:48.599 [2024-11-05 18:02:08.559471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:48.599 [2024-11-05 18:02:08.559478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.599 [2024-11-05 18:02:08.564091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.599 [2024-11-05 18:02:08.564123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:48.599 [2024-11-05 18:02:08.564133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.591 ms 00:27:48.599 [2024-11-05 18:02:08.564140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.599 [2024-11-05 18:02:08.564208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.599 [2024-11-05 18:02:08.564221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:48.599 [2024-11-05 18:02:08.564228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:27:48.599 [2024-11-05 18:02:08.564238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.599 [2024-11-05 18:02:08.565060] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 70.302 ms, result 0 
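Each FTL management phase above is traced as an Action record carrying a name, a duration, and a status; the startup sequence that just finished totals 70.302 ms. A minimal post-processing sketch for pulling the per-step timings out of such output, assuming the raw (unwrapped) console log keeps one trace_step record per line and that ftl.log is a hypothetical capture of it:

    # Print each management step with its duration, e.g. "Initialize NV cache -> 22.568 ms"
    awk '/428:trace_step/ { sub(/.*name: /, "");     name = $0 }
         /430:trace_step/ { sub(/.*duration: /, ""); print name " -> " $0 }' ftl.log
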
00:27:49.982  [2024-11-05T18:02:10.913Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-05T18:02:11.876Z] Copying: 40/1024 [MB] (20 MBps) [2024-11-05T18:02:12.823Z] Copying: 56/1024 [MB] (15 MBps) [2024-11-05T18:02:13.766Z] Copying: 70/1024 [MB] (13 MBps) [2024-11-05T18:02:15.153Z] Copying: 83/1024 [MB] (13 MBps) [2024-11-05T18:02:16.098Z] Copying: 101/1024 [MB] (18 MBps) [2024-11-05T18:02:17.045Z] Copying: 116/1024 [MB] (14 MBps) [2024-11-05T18:02:17.987Z] Copying: 137/1024 [MB] (21 MBps) [2024-11-05T18:02:18.929Z] Copying: 151/1024 [MB] (14 MBps) [2024-11-05T18:02:19.873Z] Copying: 167/1024 [MB] (15 MBps) [2024-11-05T18:02:20.816Z] Copying: 185/1024 [MB] (18 MBps) [2024-11-05T18:02:21.760Z] Copying: 199/1024 [MB] (13 MBps) [2024-11-05T18:02:23.148Z] Copying: 213/1024 [MB] (14 MBps) [2024-11-05T18:02:24.093Z] Copying: 227/1024 [MB] (13 MBps) [2024-11-05T18:02:25.038Z] Copying: 239/1024 [MB] (11 MBps) [2024-11-05T18:02:25.985Z] Copying: 254/1024 [MB] (15 MBps) [2024-11-05T18:02:26.927Z] Copying: 267/1024 [MB] (13 MBps) [2024-11-05T18:02:27.867Z] Copying: 285/1024 [MB] (17 MBps) [2024-11-05T18:02:28.814Z] Copying: 303/1024 [MB] (17 MBps) [2024-11-05T18:02:29.759Z] Copying: 317/1024 [MB] (14 MBps) [2024-11-05T18:02:31.147Z] Copying: 330/1024 [MB] (13 MBps) [2024-11-05T18:02:32.093Z] Copying: 345/1024 [MB] (14 MBps) [2024-11-05T18:02:33.037Z] Copying: 362/1024 [MB] (17 MBps) [2024-11-05T18:02:33.993Z] Copying: 376/1024 [MB] (13 MBps) [2024-11-05T18:02:34.963Z] Copying: 401/1024 [MB] (25 MBps) [2024-11-05T18:02:35.906Z] Copying: 412/1024 [MB] (11 MBps) [2024-11-05T18:02:36.849Z] Copying: 424/1024 [MB] (11 MBps) [2024-11-05T18:02:37.792Z] Copying: 436/1024 [MB] (12 MBps) [2024-11-05T18:02:38.735Z] Copying: 446/1024 [MB] (10 MBps) [2024-11-05T18:02:40.121Z] Copying: 457/1024 [MB] (10 MBps) [2024-11-05T18:02:41.062Z] Copying: 471/1024 [MB] (13 MBps) [2024-11-05T18:02:42.003Z] Copying: 493/1024 [MB] (22 MBps) [2024-11-05T18:02:42.947Z] Copying: 516/1024 [MB] (23 MBps) [2024-11-05T18:02:43.917Z] Copying: 529/1024 [MB] (13 MBps) [2024-11-05T18:02:44.857Z] Copying: 546/1024 [MB] (16 MBps) [2024-11-05T18:02:45.800Z] Copying: 565/1024 [MB] (19 MBps) [2024-11-05T18:02:46.743Z] Copying: 589/1024 [MB] (23 MBps) [2024-11-05T18:02:48.129Z] Copying: 612/1024 [MB] (23 MBps) [2024-11-05T18:02:49.072Z] Copying: 628/1024 [MB] (16 MBps) [2024-11-05T18:02:50.018Z] Copying: 649/1024 [MB] (21 MBps) [2024-11-05T18:02:50.961Z] Copying: 662/1024 [MB] (12 MBps) [2024-11-05T18:02:51.906Z] Copying: 673/1024 [MB] (11 MBps) [2024-11-05T18:02:52.850Z] Copying: 691/1024 [MB] (17 MBps) [2024-11-05T18:02:53.794Z] Copying: 711/1024 [MB] (19 MBps) [2024-11-05T18:02:54.738Z] Copying: 726/1024 [MB] (14 MBps) [2024-11-05T18:02:56.125Z] Copying: 740/1024 [MB] (14 MBps) [2024-11-05T18:02:57.069Z] Copying: 754/1024 [MB] (14 MBps) [2024-11-05T18:02:58.010Z] Copying: 765/1024 [MB] (10 MBps) [2024-11-05T18:02:58.994Z] Copying: 779/1024 [MB] (13 MBps) [2024-11-05T18:02:59.928Z] Copying: 813/1024 [MB] (34 MBps) [2024-11-05T18:03:00.862Z] Copying: 855/1024 [MB] (41 MBps) [2024-11-05T18:03:01.798Z] Copying: 897/1024 [MB] (42 MBps) [2024-11-05T18:03:02.738Z] Copying: 936/1024 [MB] (39 MBps) [2024-11-05T18:03:04.121Z] Copying: 958/1024 [MB] (21 MBps) [2024-11-05T18:03:05.063Z] Copying: 986/1024 [MB] (28 MBps) [2024-11-05T18:03:05.063Z] Copying: 1017/1024 [MB] (30 MBps) [2024-11-05T18:03:05.325Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-05 18:03:05.300496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.334 [2024-11-05 
18:03:05.300562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:45.334 [2024-11-05 18:03:05.300577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:45.334 [2024-11-05 18:03:05.300590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.334 [2024-11-05 18:03:05.300610] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:45.334 [2024-11-05 18:03:05.301090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.334 [2024-11-05 18:03:05.301108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:45.334 [2024-11-05 18:03:05.301117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.463 ms 00:28:45.334 [2024-11-05 18:03:05.301125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.334 [2024-11-05 18:03:05.301340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.334 [2024-11-05 18:03:05.301350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:45.334 [2024-11-05 18:03:05.301360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:28:45.335 [2024-11-05 18:03:05.301368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.335 [2024-11-05 18:03:05.301399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.335 [2024-11-05 18:03:05.301408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:28:45.335 [2024-11-05 18:03:05.301417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:45.335 [2024-11-05 18:03:05.301425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.335 [2024-11-05 18:03:05.301478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.335 [2024-11-05 18:03:05.301488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:28:45.335 [2024-11-05 18:03:05.301497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:45.335 [2024-11-05 18:03:05.301505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.335 [2024-11-05 18:03:05.301519] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:45.335 [2024-11-05 18:03:05.301532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 
[2024-11-05 18:03:05.301602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 
state: free 00:28:45.335 [2024-11-05 18:03:05.301889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.301992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.302000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.302007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.302015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.302022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.302029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.302036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.302044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.302051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.302058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.302079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 
0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.302087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.302095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.302103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.302110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.302118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.302125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.302133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.302140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.302147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.302155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.302162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.302170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.302177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.302185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.302192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.302199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.302208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.302215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.302223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.302230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.302237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:45.335 [2024-11-05 18:03:05.302245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:45.336 [2024-11-05 18:03:05.302252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:45.336 [2024-11-05 18:03:05.302260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:45.336 [2024-11-05 18:03:05.302268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:45.336 [2024-11-05 18:03:05.302276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:45.336 [2024-11-05 18:03:05.302284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:45.336 [2024-11-05 18:03:05.302291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:45.336 [2024-11-05 18:03:05.302298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:45.336 [2024-11-05 18:03:05.302306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:45.336 [2024-11-05 18:03:05.302313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:45.336 [2024-11-05 18:03:05.302320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:45.336 [2024-11-05 18:03:05.302327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:45.336 [2024-11-05 18:03:05.302335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:45.336 [2024-11-05 18:03:05.302342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:45.336 [2024-11-05 18:03:05.302349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:45.336 [2024-11-05 18:03:05.302357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:45.336 [2024-11-05 18:03:05.302364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:45.336 [2024-11-05 18:03:05.302371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:45.336 [2024-11-05 18:03:05.302378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:45.336 [2024-11-05 18:03:05.302386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:45.336 [2024-11-05 18:03:05.302394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:45.336 [2024-11-05 18:03:05.302409] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:45.336 [2024-11-05 18:03:05.302419] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b68b3d0c-2388-4054-9d20-b1208a1cd419 00:28:45.336 [2024-11-05 18:03:05.302426] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:45.336 [2024-11-05 18:03:05.302433] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:28:45.336 [2024-11-05 18:03:05.302440] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:45.336 [2024-11-05 18:03:05.302447] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:45.336 [2024-11-05 18:03:05.302459] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:45.336 [2024-11-05 18:03:05.302467] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:45.336 [2024-11-05 18:03:05.302474] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:45.336 [2024-11-05 
18:03:05.302481] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:45.336 [2024-11-05 18:03:05.302487] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:45.336 [2024-11-05 18:03:05.302494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.336 [2024-11-05 18:03:05.302505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:45.336 [2024-11-05 18:03:05.302513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.975 ms 00:28:45.336 [2024-11-05 18:03:05.302520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.336 [2024-11-05 18:03:05.303985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.336 [2024-11-05 18:03:05.304008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:45.336 [2024-11-05 18:03:05.304018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.448 ms 00:28:45.336 [2024-11-05 18:03:05.304025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.336 [2024-11-05 18:03:05.304120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.336 [2024-11-05 18:03:05.304129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:45.336 [2024-11-05 18:03:05.304142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:28:45.336 [2024-11-05 18:03:05.304149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.336 [2024-11-05 18:03:05.309083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.336 [2024-11-05 18:03:05.309109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:45.336 [2024-11-05 18:03:05.309119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.336 [2024-11-05 18:03:05.309126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.336 [2024-11-05 18:03:05.309177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.336 [2024-11-05 18:03:05.309185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:45.336 [2024-11-05 18:03:05.309195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.336 [2024-11-05 18:03:05.309202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.336 [2024-11-05 18:03:05.309235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.336 [2024-11-05 18:03:05.309244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:45.336 [2024-11-05 18:03:05.309251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.336 [2024-11-05 18:03:05.309259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.336 [2024-11-05 18:03:05.309272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.336 [2024-11-05 18:03:05.309279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:45.336 [2024-11-05 18:03:05.309287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.336 [2024-11-05 18:03:05.309296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.336 [2024-11-05 18:03:05.320029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.336 [2024-11-05 18:03:05.320235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize NV cache 00:28:45.336 [2024-11-05 18:03:05.320252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.336 [2024-11-05 18:03:05.320260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.597 [2024-11-05 18:03:05.329047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.597 [2024-11-05 18:03:05.329096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:45.597 [2024-11-05 18:03:05.329106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.597 [2024-11-05 18:03:05.329118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.597 [2024-11-05 18:03:05.329166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.597 [2024-11-05 18:03:05.329175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:45.597 [2024-11-05 18:03:05.329183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.597 [2024-11-05 18:03:05.329463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.597 [2024-11-05 18:03:05.329487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.597 [2024-11-05 18:03:05.329495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:45.597 [2024-11-05 18:03:05.329503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.597 [2024-11-05 18:03:05.329510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.597 [2024-11-05 18:03:05.329559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.597 [2024-11-05 18:03:05.329568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:45.597 [2024-11-05 18:03:05.329575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.597 [2024-11-05 18:03:05.329582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.597 [2024-11-05 18:03:05.329610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.597 [2024-11-05 18:03:05.329618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:45.597 [2024-11-05 18:03:05.329626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.597 [2024-11-05 18:03:05.329633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.597 [2024-11-05 18:03:05.329668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.597 [2024-11-05 18:03:05.329677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:45.597 [2024-11-05 18:03:05.329685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.597 [2024-11-05 18:03:05.329692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.597 [2024-11-05 18:03:05.329729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:45.597 [2024-11-05 18:03:05.329738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:45.597 [2024-11-05 18:03:05.329746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:45.597 [2024-11-05 18:03:05.329754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.597 [2024-11-05 18:03:05.329866] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, 
name 'FTL fast shutdown', duration = 29.354 ms, result 0 00:28:45.597 00:28:45.597 00:28:45.597 18:03:05 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:48.139 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:48.139 18:03:07 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:28:48.139 [2024-11-05 18:03:07.699645] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:28:48.139 [2024-11-05 18:03:07.699764] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93856 ] 00:28:48.139 [2024-11-05 18:03:07.828985] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:48.139 [2024-11-05 18:03:07.860671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.139 [2024-11-05 18:03:07.879763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.139 [2024-11-05 18:03:07.966899] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:48.140 [2024-11-05 18:03:07.966964] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:48.140 [2024-11-05 18:03:08.123749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.140 [2024-11-05 18:03:08.123917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:48.140 [2024-11-05 18:03:08.123941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:48.140 [2024-11-05 18:03:08.123950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.140 [2024-11-05 18:03:08.124012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.140 [2024-11-05 18:03:08.124022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:48.140 [2024-11-05 18:03:08.124031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:28:48.140 [2024-11-05 18:03:08.124038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.140 [2024-11-05 18:03:08.124082] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:48.140 [2024-11-05 18:03:08.124322] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:48.140 [2024-11-05 18:03:08.124336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.140 [2024-11-05 18:03:08.124348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:48.140 [2024-11-05 18:03:08.124358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:28:48.140 [2024-11-05 18:03:08.124366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.140 [2024-11-05 18:03:08.124622] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:28:48.140 [2024-11-05 18:03:08.124644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.140 [2024-11-05 18:03:08.124658] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:48.140 [2024-11-05 18:03:08.124674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:28:48.140 [2024-11-05 18:03:08.124685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.140 [2024-11-05 18:03:08.124733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.140 [2024-11-05 18:03:08.124746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:48.140 [2024-11-05 18:03:08.124757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:28:48.140 [2024-11-05 18:03:08.124765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.140 [2024-11-05 18:03:08.125016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.140 [2024-11-05 18:03:08.125033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:48.140 [2024-11-05 18:03:08.125042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:28:48.140 [2024-11-05 18:03:08.125049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.140 [2024-11-05 18:03:08.125140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.140 [2024-11-05 18:03:08.125159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:48.140 [2024-11-05 18:03:08.125172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:28:48.140 [2024-11-05 18:03:08.125184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.140 [2024-11-05 18:03:08.125207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.140 [2024-11-05 18:03:08.125215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:48.140 [2024-11-05 18:03:08.125227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:48.140 [2024-11-05 18:03:08.125235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.140 [2024-11-05 18:03:08.125254] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:48.140 [2024-11-05 18:03:08.126683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.140 [2024-11-05 18:03:08.126711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:48.140 [2024-11-05 18:03:08.126726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.433 ms 00:28:48.140 [2024-11-05 18:03:08.126734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.140 [2024-11-05 18:03:08.126761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.140 [2024-11-05 18:03:08.126779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:48.140 [2024-11-05 18:03:08.126788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:48.140 [2024-11-05 18:03:08.126796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.140 [2024-11-05 18:03:08.126815] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:48.140 [2024-11-05 18:03:08.126835] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:48.140 [2024-11-05 18:03:08.126870] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob 
load 0x48 bytes 00:28:48.140 [2024-11-05 18:03:08.126886] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:48.140 [2024-11-05 18:03:08.126987] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:48.140 [2024-11-05 18:03:08.126998] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:48.140 [2024-11-05 18:03:08.127009] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:48.140 [2024-11-05 18:03:08.127026] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:48.140 [2024-11-05 18:03:08.127038] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:48.140 [2024-11-05 18:03:08.127047] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:48.140 [2024-11-05 18:03:08.127055] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:48.140 [2024-11-05 18:03:08.127090] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:48.140 [2024-11-05 18:03:08.127099] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:48.140 [2024-11-05 18:03:08.127107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.140 [2024-11-05 18:03:08.127120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:48.140 [2024-11-05 18:03:08.127129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:28:48.140 [2024-11-05 18:03:08.127137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.140 [2024-11-05 18:03:08.127219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.140 [2024-11-05 18:03:08.127228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:48.140 [2024-11-05 18:03:08.127239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:28:48.140 [2024-11-05 18:03:08.127247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.140 [2024-11-05 18:03:08.127355] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:48.140 [2024-11-05 18:03:08.127368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:48.140 [2024-11-05 18:03:08.127381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:48.140 [2024-11-05 18:03:08.127390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.140 [2024-11-05 18:03:08.127398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:48.140 [2024-11-05 18:03:08.127406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:48.140 [2024-11-05 18:03:08.127414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:48.140 [2024-11-05 18:03:08.127422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:48.140 [2024-11-05 18:03:08.127430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:48.140 [2024-11-05 18:03:08.127438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:48.140 [2024-11-05 18:03:08.127450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:48.140 [2024-11-05 
18:03:08.127458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:48.140 [2024-11-05 18:03:08.127466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:48.140 [2024-11-05 18:03:08.127473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:48.140 [2024-11-05 18:03:08.127481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:48.140 [2024-11-05 18:03:08.127489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.140 [2024-11-05 18:03:08.127497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:48.140 [2024-11-05 18:03:08.127508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:48.140 [2024-11-05 18:03:08.127522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.140 [2024-11-05 18:03:08.127531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:48.140 [2024-11-05 18:03:08.127539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:48.140 [2024-11-05 18:03:08.127547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:48.140 [2024-11-05 18:03:08.127554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:48.140 [2024-11-05 18:03:08.127562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:48.140 [2024-11-05 18:03:08.127570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:48.140 [2024-11-05 18:03:08.127577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:48.140 [2024-11-05 18:03:08.127584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:48.140 [2024-11-05 18:03:08.127591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:48.140 [2024-11-05 18:03:08.127599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:48.140 [2024-11-05 18:03:08.127607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:48.140 [2024-11-05 18:03:08.127614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:48.140 [2024-11-05 18:03:08.127621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:48.140 [2024-11-05 18:03:08.127629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:48.140 [2024-11-05 18:03:08.127644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:48.140 [2024-11-05 18:03:08.127652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:48.140 [2024-11-05 18:03:08.127659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:48.140 [2024-11-05 18:03:08.127668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:48.140 [2024-11-05 18:03:08.127676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:48.140 [2024-11-05 18:03:08.127684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:48.141 [2024-11-05 18:03:08.127691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.141 [2024-11-05 18:03:08.127699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:48.141 [2024-11-05 18:03:08.127706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:48.141 [2024-11-05 18:03:08.127714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 0.12 MiB 00:28:48.141 [2024-11-05 18:03:08.127722] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:48.141 [2024-11-05 18:03:08.127733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:48.141 [2024-11-05 18:03:08.127741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:48.141 [2024-11-05 18:03:08.127751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.141 [2024-11-05 18:03:08.127759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:48.141 [2024-11-05 18:03:08.127769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:48.141 [2024-11-05 18:03:08.127779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:48.141 [2024-11-05 18:03:08.127788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:48.141 [2024-11-05 18:03:08.127800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:48.141 [2024-11-05 18:03:08.127808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:48.141 [2024-11-05 18:03:08.127818] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:48.141 [2024-11-05 18:03:08.127828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:48.141 [2024-11-05 18:03:08.127838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:48.141 [2024-11-05 18:03:08.127847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:48.141 [2024-11-05 18:03:08.127855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:48.141 [2024-11-05 18:03:08.127863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:48.141 [2024-11-05 18:03:08.127872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:48.141 [2024-11-05 18:03:08.127880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:48.141 [2024-11-05 18:03:08.127888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:48.141 [2024-11-05 18:03:08.127896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:48.141 [2024-11-05 18:03:08.127904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:48.141 [2024-11-05 18:03:08.127912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:48.141 [2024-11-05 18:03:08.127922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:48.141 [2024-11-05 18:03:08.127930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 
00:28:48.141 [2024-11-05 18:03:08.127939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:48.141 [2024-11-05 18:03:08.127948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:48.141 [2024-11-05 18:03:08.127956] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:48.141 [2024-11-05 18:03:08.127965] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:48.141 [2024-11-05 18:03:08.127974] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:48.141 [2024-11-05 18:03:08.127982] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:48.141 [2024-11-05 18:03:08.127991] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:48.141 [2024-11-05 18:03:08.127999] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:48.141 [2024-11-05 18:03:08.128007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.141 [2024-11-05 18:03:08.128018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:48.141 [2024-11-05 18:03:08.128030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.719 ms 00:28:48.141 [2024-11-05 18:03:08.128038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.403 [2024-11-05 18:03:08.133985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.403 [2024-11-05 18:03:08.134020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:48.403 [2024-11-05 18:03:08.134029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.877 ms 00:28:48.403 [2024-11-05 18:03:08.134036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.403 [2024-11-05 18:03:08.134124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.403 [2024-11-05 18:03:08.134133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:48.403 [2024-11-05 18:03:08.134140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:28:48.403 [2024-11-05 18:03:08.134164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.403 [2024-11-05 18:03:08.150557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.403 [2024-11-05 18:03:08.150687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:48.403 [2024-11-05 18:03:08.150704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.360 ms 00:28:48.403 [2024-11-05 18:03:08.150712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.403 [2024-11-05 18:03:08.150750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.403 [2024-11-05 18:03:08.150760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:48.403 [2024-11-05 18:03:08.150779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 
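Note on reading the two dumps above: the SB metadata layout lists each region as type/ver with blk_offs/blk_sz in hexadecimal FTL blocks, while the layout dump prints the same regions in MiB. A minimal Python sketch of the conversion, assuming the customary 4 KiB SPDK FTL block size (an assumption here, not read from the log; region names come from the matching MiB dump and the entries are transcribed from the SB metadata dump):

    # Cross-check SB metadata entries (blk_offs/blk_sz in FTL blocks) against
    # the MiB figures in the human-readable layout dump.
    FTL_BLOCK_SIZE = 4096  # bytes per FTL block -- assumed, not read from the log

    regions = [              # (name, blk_offs, blk_sz) transcribed from the dump
        ("sb",      0x0,    0x20),    # type:0x0
        ("l2p",     0x20,   0x5000),  # type:0x2
        ("band_md", 0x5020, 0x80),    # type:0x3
        ("p2l0",    0x5120, 0x800),   # type:0xa
    ]

    mib = lambda blocks: blocks * FTL_BLOCK_SIZE / (1 << 20)  # blocks -> MiB
    for name, offs, size in regions:
        print(f"{name:8s} offset: {mib(offs):9.2f} MiB  blocks: {mib(size):8.2f} MiB")

This reproduces the dumped values (l2p at 0.12 MiB spanning 80.00 MiB, band_md at 80.12 MiB spanning 0.50 MiB, p2l0 at 81.12 MiB spanning 8.00 MiB), and the 80 MiB l2p region is exactly what the 20971520 L2P entries at address size 4 reported later in the log require: 20971520 * 4 bytes = 80 MiB.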
00:28:48.403 [2024-11-05 18:03:08.150787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.403 [2024-11-05 18:03:08.150879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.403 [2024-11-05 18:03:08.150893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:48.403 [2024-11-05 18:03:08.150902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:28:48.403 [2024-11-05 18:03:08.150909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.403 [2024-11-05 18:03:08.151016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.403 [2024-11-05 18:03:08.151026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:48.403 [2024-11-05 18:03:08.151034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:28:48.403 [2024-11-05 18:03:08.151041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.403 [2024-11-05 18:03:08.156078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.403 [2024-11-05 18:03:08.156115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:48.403 [2024-11-05 18:03:08.156128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.019 ms 00:28:48.403 [2024-11-05 18:03:08.156136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.403 [2024-11-05 18:03:08.156238] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:48.403 [2024-11-05 18:03:08.156255] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:48.403 [2024-11-05 18:03:08.156270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.403 [2024-11-05 18:03:08.156278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:48.403 [2024-11-05 18:03:08.156290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:28:48.403 [2024-11-05 18:03:08.156301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.403 [2024-11-05 18:03:08.169837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.403 [2024-11-05 18:03:08.169958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:48.403 [2024-11-05 18:03:08.169982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.521 ms 00:28:48.403 [2024-11-05 18:03:08.169990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.403 [2024-11-05 18:03:08.170115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.403 [2024-11-05 18:03:08.170124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:48.403 [2024-11-05 18:03:08.170132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:28:48.403 [2024-11-05 18:03:08.170143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.403 [2024-11-05 18:03:08.170186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.403 [2024-11-05 18:03:08.170199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:48.403 [2024-11-05 18:03:08.170207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:28:48.403 [2024-11-05 18:03:08.170213] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.403 [2024-11-05 18:03:08.170502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.403 [2024-11-05 18:03:08.170511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:48.403 [2024-11-05 18:03:08.170521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:28:48.403 [2024-11-05 18:03:08.170528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.403 [2024-11-05 18:03:08.170542] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:28:48.403 [2024-11-05 18:03:08.170551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.403 [2024-11-05 18:03:08.170560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:48.403 [2024-11-05 18:03:08.170571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:48.403 [2024-11-05 18:03:08.170579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.403 [2024-11-05 18:03:08.178501] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:48.403 [2024-11-05 18:03:08.178619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.403 [2024-11-05 18:03:08.178629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:48.403 [2024-11-05 18:03:08.178637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.023 ms 00:28:48.403 [2024-11-05 18:03:08.178649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.403 [2024-11-05 18:03:08.180999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.403 [2024-11-05 18:03:08.181115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:48.403 [2024-11-05 18:03:08.181135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.330 ms 00:28:48.403 [2024-11-05 18:03:08.181145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.403 [2024-11-05 18:03:08.181207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.403 [2024-11-05 18:03:08.181216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:48.403 [2024-11-05 18:03:08.181225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:28:48.403 [2024-11-05 18:03:08.181232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.403 [2024-11-05 18:03:08.181271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.403 [2024-11-05 18:03:08.181279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:48.403 [2024-11-05 18:03:08.181291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:48.403 [2024-11-05 18:03:08.181298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.403 [2024-11-05 18:03:08.181329] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:48.403 [2024-11-05 18:03:08.181338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.403 [2024-11-05 18:03:08.181345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:48.403 [2024-11-05 18:03:08.181352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:48.403 [2024-11-05 
18:03:08.181359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.403 [2024-11-05 18:03:08.185444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.403 [2024-11-05 18:03:08.185476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:48.403 [2024-11-05 18:03:08.185486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.064 ms 00:28:48.403 [2024-11-05 18:03:08.185493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.403 [2024-11-05 18:03:08.185562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.403 [2024-11-05 18:03:08.185571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:48.403 [2024-11-05 18:03:08.185580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:28:48.403 [2024-11-05 18:03:08.185590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.404 [2024-11-05 18:03:08.186423] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 62.300 ms, result 0 00:28:49.346  [2024-11-05T18:03:10.272Z] Copying: 27/1024 [MB] (27 MBps) [~60 periodic spdk_dd progress updates elided: rates ranged 10-52 MBps, with a mid-run stretch reported in kB at roughly 9472-10232 kBps] [2024-11-05T18:04:06.883Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-05 18:04:06.643619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.892 [2024-11-05 18:04:06.643848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:46.892 [2024-11-05 18:04:06.643930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:46.892 [2024-11-05 18:04:06.643957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.892 [2024-11-05 18:04:06.645556] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:46.892 [2024-11-05 18:04:06.649826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.892 [2024-11-05 18:04:06.650004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:46.892 [2024-11-05 18:04:06.650143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.097 ms 00:29:46.892 [2024-11-05 18:04:06.650175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.892 [2024-11-05 18:04:06.660626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.892 [2024-11-05 18:04:06.660801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:46.892 [2024-11-05 18:04:06.660835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.432 ms 00:29:46.892 [2024-11-05 18:04:06.660844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.892 [2024-11-05 18:04:06.660881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.893 [2024-11-05 18:04:06.660892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:46.893 [2024-11-05 18:04:06.660902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:46.893 [2024-11-05 18:04:06.660911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.893
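The progress trace above mixes MB- and kB-denominated updates before the final average. A small sketch, relying only on the two line formats visible in the log, that normalizes both and recomputes the rate between the first and last updates (the parse helper is hypothetical, not part of spdk_dd):

    import re
    from datetime import datetime

    # Matches both "Copying: 27/1024 [MB] (27 MBps)"
    # and "Copying: 776656/1048576 [kB] (10184 kBps)".
    LINE = re.compile(r"\[(?P<ts>[0-9T:.\-]+)Z\] Copying: (?P<done>\d+)/\d+ \[(?P<unit>MB|kB)\]")

    def parse(line):
        m = LINE.search(line)
        ts = datetime.fromisoformat(m.group("ts"))
        mb = int(m.group("done")) / (1024 if m.group("unit") == "kB" else 1)
        return ts, mb

    first = parse("[2024-11-05T18:03:10.272Z] Copying: 27/1024 [MB] (27 MBps)")
    last = parse("[2024-11-05T18:04:06.883Z] Copying: 1024/1024 [MB] (average 17 MBps)")
    elapsed = (last[0] - first[0]).total_seconds()
    print(f"{(last[1] - first[1]) / elapsed:.1f} MBps over {elapsed:.1f} s")
    # -> ~17.6 MBps over ~56.6 s

That is consistent with the logged average of 17 MBps, which also charges the ramp-up before the first progress report against the total.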
[2024-11-05 18:04:06.660972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.893 [2024-11-05 18:04:06.660983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:46.893 [2024-11-05 18:04:06.660996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:29:46.893 [2024-11-05 18:04:06.661005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.893 [2024-11-05 18:04:06.661018] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:46.893 [2024-11-05 18:04:06.661031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129024 / 261120 wr_cnt: 1 state: open 00:29:46.893 [Bands 2 through 100 elided: each reads 0 / 261120 wr_cnt: 0 state: free] 00:29:46.894 [2024-11-05 18:04:06.661955] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:46.894 [2024-11-05 18:04:06.661968] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b68b3d0c-2388-4054-9d20-b1208a1cd419 00:29:46.894 [2024-11-05 18:04:06.661977] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129024 00:29:46.894 [2024-11-05 18:04:06.661985] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129056 00:29:46.894 [2024-11-05 18:04:06.661992] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129024 00:29:46.894 [2024-11-05 18:04:06.662000] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:29:46.894 [2024-11-05 18:04:06.662008] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:46.894 [2024-11-05 18:04:06.662019] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:46.894 [2024-11-05 18:04:06.662028] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:46.894 [2024-11-05 18:04:06.662034] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:46.894 [2024-11-05 18:04:06.662041] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:46.894 [2024-11-05 18:04:06.662048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.894 [2024-11-05 18:04:06.662061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:46.894 [2024-11-05 18:04:06.662091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.031 ms 00:29:46.894 [2024-11-05 18:04:06.662099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.894 [2024-11-05 18:04:06.664604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.894 [2024-11-05 18:04:06.664642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:46.894 [2024-11-05 18:04:06.664664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.487 ms 00:29:46.894 [2024-11-05 18:04:06.664676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.894 [2024-11-05 18:04:06.664801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.894 [2024-11-05 18:04:06.664812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:46.894 [2024-11-05 18:04:06.664822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:29:46.894 [2024-11-05 18:04:06.664831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.894 [2024-11-05 18:04:06.672865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.894 [2024-11-05 18:04:06.672925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:46.894 [2024-11-05 18:04:06.672943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.894 [2024-11-05 18:04:06.672951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.894 [2024-11-05 18:04:06.673024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.894 [2024-11-05
18:04:06.673033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:46.894 [2024-11-05 18:04:06.673043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.894 [2024-11-05 18:04:06.673050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.894 [2024-11-05 18:04:06.673127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.894 [2024-11-05 18:04:06.673138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:46.894 [2024-11-05 18:04:06.673151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.894 [2024-11-05 18:04:06.673159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.894 [2024-11-05 18:04:06.673176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.894 [2024-11-05 18:04:06.673185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:46.894 [2024-11-05 18:04:06.673193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.894 [2024-11-05 18:04:06.673201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.894 [2024-11-05 18:04:06.687167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.894 [2024-11-05 18:04:06.687375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:46.894 [2024-11-05 18:04:06.687394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.894 [2024-11-05 18:04:06.687403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.894 [2024-11-05 18:04:06.698295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.894 [2024-11-05 18:04:06.698464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:46.894 [2024-11-05 18:04:06.698484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.894 [2024-11-05 18:04:06.698493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.894 [2024-11-05 18:04:06.698547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.894 [2024-11-05 18:04:06.698557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:46.894 [2024-11-05 18:04:06.698566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.894 [2024-11-05 18:04:06.698582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.894 [2024-11-05 18:04:06.698618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.894 [2024-11-05 18:04:06.698628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:46.894 [2024-11-05 18:04:06.698635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.894 [2024-11-05 18:04:06.698644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.894 [2024-11-05 18:04:06.698700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.894 [2024-11-05 18:04:06.698711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:46.894 [2024-11-05 18:04:06.698719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.894 [2024-11-05 18:04:06.698727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.894 [2024-11-05 18:04:06.698789] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.894 [2024-11-05 18:04:06.698804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:46.894 [2024-11-05 18:04:06.698817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.894 [2024-11-05 18:04:06.698829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.894 [2024-11-05 18:04:06.698884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.894 [2024-11-05 18:04:06.698896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:46.894 [2024-11-05 18:04:06.698915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.894 [2024-11-05 18:04:06.698927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.894 [2024-11-05 18:04:06.698994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.894 [2024-11-05 18:04:06.699009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:46.894 [2024-11-05 18:04:06.699019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.894 [2024-11-05 18:04:06.699028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.894 [2024-11-05 18:04:06.699195] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 59.026 ms, result 0 00:29:49.442 00:29:49.442 00:29:49.442 18:04:08 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:29:49.442 [2024-11-05 18:04:09.070998] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:29:49.442 [2024-11-05 18:04:09.071177] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94469 ] 00:29:49.442 [2024-11-05 18:04:09.203962] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
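Two figures from this run can be sanity-checked straight from the spdk_dd command line and the dump-stats counters above; a sketch assuming --skip/--count are counted in ftl0 logical blocks of 4 KiB (an assumption, but it is the block size that makes the copy total come out at 1024 MiB):

    FTL_BLOCK = 4096                    # bytes; assumed ftl0 logical block size

    skip, count = 131072, 262144        # from the spdk_dd invocation above
    print(skip * FTL_BLOCK >> 20, "MiB skipped,",
          count * FTL_BLOCK >> 20, "MiB to copy")     # 512 MiB skipped, 1024 MiB to copy

    # Write amplification from the ftl_dev_dump_stats counters above:
    total_writes, user_writes = 129056, 129024
    print(f"WAF = {total_writes / user_writes:.4f}")  # WAF = 1.0002, as logged

The 1024 MiB window matches the Copying: .../1024 [MB] totals, and the WAF of 1.0002 reflects 32 blocks of metadata written on top of the 129024 user blocks.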
00:29:49.442 [2024-11-05 18:04:09.234871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.442 [2024-11-05 18:04:09.275788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.442 [2024-11-05 18:04:09.401457] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:49.442 [2024-11-05 18:04:09.401575] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:49.706 [2024-11-05 18:04:09.564726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.706 [2024-11-05 18:04:09.564956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:49.706 [2024-11-05 18:04:09.564981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:49.706 [2024-11-05 18:04:09.564991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.706 [2024-11-05 18:04:09.565096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.706 [2024-11-05 18:04:09.565109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:49.706 [2024-11-05 18:04:09.565119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:29:49.706 [2024-11-05 18:04:09.565127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.706 [2024-11-05 18:04:09.565155] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:49.706 [2024-11-05 18:04:09.565409] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:49.706 [2024-11-05 18:04:09.565426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.706 [2024-11-05 18:04:09.565436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:49.706 [2024-11-05 18:04:09.565451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:29:49.706 [2024-11-05 18:04:09.565460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.706 [2024-11-05 18:04:09.565777] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:49.706 [2024-11-05 18:04:09.565805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.706 [2024-11-05 18:04:09.565815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:49.706 [2024-11-05 18:04:09.565825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:29:49.706 [2024-11-05 18:04:09.565838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.706 [2024-11-05 18:04:09.565894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.706 [2024-11-05 18:04:09.565905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:49.706 [2024-11-05 18:04:09.565913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:29:49.706 [2024-11-05 18:04:09.565921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.706 [2024-11-05 18:04:09.566192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.706 [2024-11-05 18:04:09.566208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:49.706 [2024-11-05 18:04:09.566217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:29:49.706 [2024-11-05 18:04:09.566228] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.706 [2024-11-05 18:04:09.566313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.706 [2024-11-05 18:04:09.566326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:49.706 [2024-11-05 18:04:09.566335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:29:49.706 [2024-11-05 18:04:09.566347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.706 [2024-11-05 18:04:09.566375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.706 [2024-11-05 18:04:09.566385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:49.706 [2024-11-05 18:04:09.566394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:49.706 [2024-11-05 18:04:09.566403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.706 [2024-11-05 18:04:09.566427] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:49.706 [2024-11-05 18:04:09.568558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.706 [2024-11-05 18:04:09.568599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:49.706 [2024-11-05 18:04:09.568619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.135 ms 00:29:49.706 [2024-11-05 18:04:09.568627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.706 [2024-11-05 18:04:09.568661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.706 [2024-11-05 18:04:09.568669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:49.707 [2024-11-05 18:04:09.568678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:49.707 [2024-11-05 18:04:09.568685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.707 [2024-11-05 18:04:09.568744] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:49.707 [2024-11-05 18:04:09.568771] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:49.707 [2024-11-05 18:04:09.568806] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:49.707 [2024-11-05 18:04:09.568823] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:49.707 [2024-11-05 18:04:09.568928] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:49.707 [2024-11-05 18:04:09.568939] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:49.707 [2024-11-05 18:04:09.568950] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:49.707 [2024-11-05 18:04:09.568965] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:49.707 [2024-11-05 18:04:09.568981] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:49.707 [2024-11-05 18:04:09.568989] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:49.707 [2024-11-05 18:04:09.569000] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:29:49.707 [2024-11-05 18:04:09.569007] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:49.707 [2024-11-05 18:04:09.569015] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:49.707 [2024-11-05 18:04:09.569023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.707 [2024-11-05 18:04:09.569030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:49.707 [2024-11-05 18:04:09.569038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:29:49.707 [2024-11-05 18:04:09.569045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.707 [2024-11-05 18:04:09.569168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.707 [2024-11-05 18:04:09.569179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:49.707 [2024-11-05 18:04:09.569192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:29:49.707 [2024-11-05 18:04:09.569199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.707 [2024-11-05 18:04:09.569303] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: [NV cache layout, base device layout, and SB metadata layout dumps elided: every region name, offset, size, type, version, and block range is identical to the corresponding dump from the previous startup above] 00:29:49.707 [2024-11-05 18:04:09.569928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.708 [2024-11-05 18:04:09.569936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:49.708 [2024-11-05 18:04:09.569943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:29:49.708 [2024-11-05 18:04:09.569951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.708 [2024-11-05 18:04:09.579947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.708 [2024-11-05 18:04:09.580121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:49.708 [2024-11-05 18:04:09.580185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.953 ms 00:29:49.708 [2024-11-05 18:04:09.580209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.708 [2024-11-05 18:04:09.580319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.708 [2024-11-05 18:04:09.580341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:49.708 [2024-11-05 18:04:09.580361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:29:49.708 [2024-11-05 18:04:09.580380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.708 [2024-11-05 18:04:09.604831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.708 [2024-11-05 18:04:09.605166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:49.708 [2024-11-05 18:04:09.605337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.380 ms 00:29:49.708 [2024-11-05 18:04:09.605418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.708 [2024-11-05 18:04:09.605531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.708 [2024-11-05 18:04:09.605733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:49.708 [2024-11-05 18:04:09.605789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:49.708 [2024-11-05 18:04:09.605883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.708 [2024-11-05 18:04:09.606161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.708 [2024-11-05 18:04:09.606367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:49.708 [2024-11-05 18:04:09.606479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:29:49.708 [2024-11-05 18:04:09.606530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.708 [2024-11-05 18:04:09.606888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.708 [2024-11-05 18:04:09.607042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:49.708 [2024-11-05 18:04:09.607189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:29:49.708 [2024-11-05 18:04:09.607252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.708 [2024-11-05 18:04:09.616136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.708 [2024-11-05
18:04:09.616287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:49.708 [2024-11-05 18:04:09.616348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.685 ms 00:29:49.708 [2024-11-05 18:04:09.616371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.708 [2024-11-05 18:04:09.616518] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:29:49.708 [2024-11-05 18:04:09.616559] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:49.708 [2024-11-05 18:04:09.616591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.708 [2024-11-05 18:04:09.616679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:49.708 [2024-11-05 18:04:09.616705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:29:49.708 [2024-11-05 18:04:09.616735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.708 [2024-11-05 18:04:09.629039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.708 [2024-11-05 18:04:09.629184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:49.708 [2024-11-05 18:04:09.629242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.273 ms 00:29:49.708 [2024-11-05 18:04:09.629273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.708 [2024-11-05 18:04:09.629414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.708 [2024-11-05 18:04:09.629437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:49.708 [2024-11-05 18:04:09.629502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:29:49.708 [2024-11-05 18:04:09.629536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.708 [2024-11-05 18:04:09.629603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.708 [2024-11-05 18:04:09.629683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:49.708 [2024-11-05 18:04:09.629709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:49.708 [2024-11-05 18:04:09.629886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.708 [2024-11-05 18:04:09.630288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.708 [2024-11-05 18:04:09.630310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:49.708 [2024-11-05 18:04:09.630320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:29:49.708 [2024-11-05 18:04:09.630337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.708 [2024-11-05 18:04:09.630354] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:29:49.708 [2024-11-05 18:04:09.630364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.708 [2024-11-05 18:04:09.630373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:49.708 [2024-11-05 18:04:09.630386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:49.708 [2024-11-05 18:04:09.630399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.708 [2024-11-05 18:04:09.639788] 
ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:49.708 [2024-11-05 18:04:09.639950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.708 [2024-11-05 18:04:09.639961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:49.708 [2024-11-05 18:04:09.639970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.532 ms 00:29:49.708 [2024-11-05 18:04:09.639979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.708 [2024-11-05 18:04:09.642550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.708 [2024-11-05 18:04:09.642583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:49.708 [2024-11-05 18:04:09.642593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.450 ms 00:29:49.708 [2024-11-05 18:04:09.642601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.708 [2024-11-05 18:04:09.642681] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:29:49.708 [2024-11-05 18:04:09.643315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.708 [2024-11-05 18:04:09.643334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:49.708 [2024-11-05 18:04:09.643344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.653 ms 00:29:49.708 [2024-11-05 18:04:09.643360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.708 [2024-11-05 18:04:09.643386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.708 [2024-11-05 18:04:09.643396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:49.708 [2024-11-05 18:04:09.643404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:49.708 [2024-11-05 18:04:09.643411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.708 [2024-11-05 18:04:09.643447] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:49.708 [2024-11-05 18:04:09.643456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.708 [2024-11-05 18:04:09.643462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:49.708 [2024-11-05 18:04:09.643470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:49.708 [2024-11-05 18:04:09.643477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.708 [2024-11-05 18:04:09.649566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.708 [2024-11-05 18:04:09.649621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:49.708 [2024-11-05 18:04:09.649634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.068 ms 00:29:49.708 [2024-11-05 18:04:09.649642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.708 [2024-11-05 18:04:09.649729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.708 [2024-11-05 18:04:09.649739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:49.708 [2024-11-05 18:04:09.649752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:29:49.708 [2024-11-05 18:04:09.649760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.708 
[2024-11-05 18:04:09.654655] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 87.773 ms, result 0 00:29:51.093  [2024-11-05T18:04:12.027Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-05T18:04:12.967Z] Copying: 36/1024 [MB] (16 MBps) [2024-11-05T18:04:13.908Z] Copying: 56/1024 [MB] (20 MBps) [2024-11-05T18:04:14.851Z] Copying: 66/1024 [MB] (10 MBps) [2024-11-05T18:04:16.240Z] Copying: 81/1024 [MB] (15 MBps) [2024-11-05T18:04:17.178Z] Copying: 96/1024 [MB] (14 MBps) [2024-11-05T18:04:18.117Z] Copying: 109/1024 [MB] (13 MBps) [2024-11-05T18:04:19.061Z] Copying: 129/1024 [MB] (20 MBps) [2024-11-05T18:04:20.005Z] Copying: 143/1024 [MB] (13 MBps) [2024-11-05T18:04:20.976Z] Copying: 154/1024 [MB] (11 MBps) [2024-11-05T18:04:21.920Z] Copying: 165/1024 [MB] (10 MBps) [2024-11-05T18:04:22.864Z] Copying: 177/1024 [MB] (11 MBps) [2024-11-05T18:04:24.250Z] Copying: 196/1024 [MB] (19 MBps) [2024-11-05T18:04:25.190Z] Copying: 219/1024 [MB] (22 MBps) [2024-11-05T18:04:26.129Z] Copying: 237/1024 [MB] (18 MBps) [2024-11-05T18:04:27.071Z] Copying: 254/1024 [MB] (17 MBps) [2024-11-05T18:04:28.015Z] Copying: 273/1024 [MB] (18 MBps) [2024-11-05T18:04:28.958Z] Copying: 287/1024 [MB] (14 MBps) [2024-11-05T18:04:29.901Z] Copying: 308/1024 [MB] (20 MBps) [2024-11-05T18:04:30.844Z] Copying: 324/1024 [MB] (16 MBps) [2024-11-05T18:04:32.231Z] Copying: 347/1024 [MB] (22 MBps) [2024-11-05T18:04:33.173Z] Copying: 359/1024 [MB] (12 MBps) [2024-11-05T18:04:34.115Z] Copying: 371/1024 [MB] (12 MBps) [2024-11-05T18:04:35.057Z] Copying: 394/1024 [MB] (22 MBps) [2024-11-05T18:04:36.002Z] Copying: 413/1024 [MB] (18 MBps) [2024-11-05T18:04:36.945Z] Copying: 433/1024 [MB] (20 MBps) [2024-11-05T18:04:37.921Z] Copying: 457/1024 [MB] (24 MBps) [2024-11-05T18:04:38.861Z] Copying: 482/1024 [MB] (25 MBps) [2024-11-05T18:04:40.246Z] Copying: 510/1024 [MB] (27 MBps) [2024-11-05T18:04:41.190Z] Copying: 533/1024 [MB] (23 MBps) [2024-11-05T18:04:42.134Z] Copying: 558/1024 [MB] (24 MBps) [2024-11-05T18:04:43.074Z] Copying: 576/1024 [MB] (18 MBps) [2024-11-05T18:04:44.048Z] Copying: 599/1024 [MB] (22 MBps) [2024-11-05T18:04:44.987Z] Copying: 621/1024 [MB] (21 MBps) [2024-11-05T18:04:45.930Z] Copying: 646/1024 [MB] (25 MBps) [2024-11-05T18:04:46.873Z] Copying: 671/1024 [MB] (24 MBps) [2024-11-05T18:04:48.256Z] Copying: 693/1024 [MB] (22 MBps) [2024-11-05T18:04:49.199Z] Copying: 714/1024 [MB] (20 MBps) [2024-11-05T18:04:50.143Z] Copying: 732/1024 [MB] (18 MBps) [2024-11-05T18:04:51.088Z] Copying: 753/1024 [MB] (20 MBps) [2024-11-05T18:04:52.032Z] Copying: 768/1024 [MB] (14 MBps) [2024-11-05T18:04:52.973Z] Copying: 782/1024 [MB] (14 MBps) [2024-11-05T18:04:53.916Z] Copying: 796/1024 [MB] (13 MBps) [2024-11-05T18:04:54.859Z] Copying: 809/1024 [MB] (13 MBps) [2024-11-05T18:04:56.246Z] Copying: 823/1024 [MB] (13 MBps) [2024-11-05T18:04:57.190Z] Copying: 837/1024 [MB] (13 MBps) [2024-11-05T18:04:58.131Z] Copying: 856/1024 [MB] (19 MBps) [2024-11-05T18:04:59.110Z] Copying: 870/1024 [MB] (13 MBps) [2024-11-05T18:05:00.053Z] Copying: 883/1024 [MB] (13 MBps) [2024-11-05T18:05:00.998Z] Copying: 898/1024 [MB] (14 MBps) [2024-11-05T18:05:01.942Z] Copying: 912/1024 [MB] (13 MBps) [2024-11-05T18:05:02.884Z] Copying: 926/1024 [MB] (13 MBps) [2024-11-05T18:05:04.258Z] Copying: 939/1024 [MB] (13 MBps) [2024-11-05T18:05:05.190Z] Copying: 984/1024 [MB] (44 MBps) [2024-11-05T18:05:05.190Z] Copying: 1020/1024 [MB] (36 MBps) [2024-11-05T18:05:05.449Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-05 
18:05:05.219108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.458 [2024-11-05 18:05:05.219172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:45.458 [2024-11-05 18:05:05.219189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:45.458 [2024-11-05 18:05:05.219200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.458 [2024-11-05 18:05:05.219226] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:45.458 [2024-11-05 18:05:05.219680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.458 [2024-11-05 18:05:05.219702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:45.458 [2024-11-05 18:05:05.219711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:30:45.458 [2024-11-05 18:05:05.219719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.458 [2024-11-05 18:05:05.219935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.458 [2024-11-05 18:05:05.219944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:45.458 [2024-11-05 18:05:05.219953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:30:45.458 [2024-11-05 18:05:05.219962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.458 [2024-11-05 18:05:05.219988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.458 [2024-11-05 18:05:05.220000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:45.458 [2024-11-05 18:05:05.220009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:45.458 [2024-11-05 18:05:05.220017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.458 [2024-11-05 18:05:05.220081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.458 [2024-11-05 18:05:05.220093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:45.458 [2024-11-05 18:05:05.220102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:30:45.458 [2024-11-05 18:05:05.220111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.458 [2024-11-05 18:05:05.220125] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:45.458 [2024-11-05 18:05:05.220138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:30:45.458 [2024-11-05 18:05:05.220152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:45.458 [2024-11-05 18:05:05.220161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:45.458 [2024-11-05 18:05:05.220170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:45.458 [2024-11-05 18:05:05.220180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:45.458 [2024-11-05 18:05:05.220188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:45.458 [2024-11-05 18:05:05.220197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:45.458 [2024-11-05 18:05:05.220206] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:45.458 [2024-11-05 18:05:05.220214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:45.458 [2024-11-05 18:05:05.220223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:45.458 [2024-11-05 18:05:05.220232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:45.458 [2024-11-05 18:05:05.220240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:45.458 [2024-11-05 18:05:05.220249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:45.458 [2024-11-05 18:05:05.220263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:45.458 [2024-11-05 18:05:05.220272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 
[2024-11-05 18:05:05.220430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:30:45.459 [2024-11-05 18:05:05.220612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:45.459 [2024-11-05 18:05:05.220932] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:45.459 [2024-11-05 18:05:05.220940] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b68b3d0c-2388-4054-9d20-b1208a1cd419 00:30:45.459 [2024-11-05 18:05:05.220947] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:30:45.459 [2024-11-05 18:05:05.220954] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 2080 00:30:45.459 [2024-11-05 18:05:05.220960] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 2048 00:30:45.459 [2024-11-05 18:05:05.220968] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0156 00:30:45.459 [2024-11-05 18:05:05.220976] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:45.460 [2024-11-05 18:05:05.220983] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 
0 00:30:45.460 [2024-11-05 18:05:05.220990] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:45.460 [2024-11-05 18:05:05.220997] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:45.460 [2024-11-05 18:05:05.221004] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:45.460 [2024-11-05 18:05:05.221010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.460 [2024-11-05 18:05:05.221018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:45.460 [2024-11-05 18:05:05.221025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.886 ms 00:30:45.460 [2024-11-05 18:05:05.221032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.460 [2024-11-05 18:05:05.222382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.460 [2024-11-05 18:05:05.222503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:45.460 [2024-11-05 18:05:05.222523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.336 ms 00:30:45.460 [2024-11-05 18:05:05.222530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.460 [2024-11-05 18:05:05.222608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.460 [2024-11-05 18:05:05.222616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:45.460 [2024-11-05 18:05:05.222625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:30:45.460 [2024-11-05 18:05:05.222631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.460 [2024-11-05 18:05:05.227161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:45.460 [2024-11-05 18:05:05.227187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:45.460 [2024-11-05 18:05:05.227196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:45.460 [2024-11-05 18:05:05.227203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.460 [2024-11-05 18:05:05.227254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:45.460 [2024-11-05 18:05:05.227262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:45.460 [2024-11-05 18:05:05.227270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:45.460 [2024-11-05 18:05:05.227276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.460 [2024-11-05 18:05:05.227305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:45.460 [2024-11-05 18:05:05.227316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:45.460 [2024-11-05 18:05:05.227323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:45.460 [2024-11-05 18:05:05.227330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.460 [2024-11-05 18:05:05.227345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:45.460 [2024-11-05 18:05:05.227352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:45.460 [2024-11-05 18:05:05.227360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:45.460 [2024-11-05 18:05:05.227367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.460 [2024-11-05 18:05:05.236573] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:45.460 [2024-11-05 18:05:05.236614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:45.460 [2024-11-05 18:05:05.236624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:45.460 [2024-11-05 18:05:05.236632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.460 [2024-11-05 18:05:05.244129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:45.460 [2024-11-05 18:05:05.244164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:45.460 [2024-11-05 18:05:05.244173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:45.460 [2024-11-05 18:05:05.244181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.460 [2024-11-05 18:05:05.244222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:45.460 [2024-11-05 18:05:05.244232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:45.460 [2024-11-05 18:05:05.244244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:45.460 [2024-11-05 18:05:05.244256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.460 [2024-11-05 18:05:05.244281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:45.460 [2024-11-05 18:05:05.244289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:45.460 [2024-11-05 18:05:05.244297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:45.460 [2024-11-05 18:05:05.244305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.460 [2024-11-05 18:05:05.244350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:45.460 [2024-11-05 18:05:05.244362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:45.460 [2024-11-05 18:05:05.244370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:45.460 [2024-11-05 18:05:05.244382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.460 [2024-11-05 18:05:05.244412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:45.460 [2024-11-05 18:05:05.244421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:45.460 [2024-11-05 18:05:05.244428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:45.460 [2024-11-05 18:05:05.244435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.460 [2024-11-05 18:05:05.244467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:45.460 [2024-11-05 18:05:05.244476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:45.460 [2024-11-05 18:05:05.244483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:45.460 [2024-11-05 18:05:05.244493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.460 [2024-11-05 18:05:05.244528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:45.460 [2024-11-05 18:05:05.244537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:45.460 [2024-11-05 18:05:05.244545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:45.460 [2024-11-05 18:05:05.244556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:30:45.460 [2024-11-05 18:05:05.244665] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 25.555 ms, result 0 00:30:45.460 00:30:45.460 00:30:45.460 18:05:05 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:47.990 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:47.990 18:05:07 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:30:47.990 18:05:07 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:30:47.990 18:05:07 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:47.990 18:05:07 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:47.990 18:05:07 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:47.990 Process with pid 92482 is not found 00:30:47.990 Remove shared memory files 00:30:47.990 18:05:07 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 92482 00:30:47.990 18:05:07 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # '[' -z 92482 ']' 00:30:47.990 18:05:07 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # kill -0 92482 00:30:47.990 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (92482) - No such process 00:30:47.990 18:05:07 ftl.ftl_restore_fast -- common/autotest_common.sh@979 -- # echo 'Process with pid 92482 is not found' 00:30:47.990 18:05:07 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:30:47.990 18:05:07 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:47.990 18:05:07 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:30:47.990 18:05:07 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_b68b3d0c-2388-4054-9d20-b1208a1cd419_band_md /dev/hugepages/ftl_b68b3d0c-2388-4054-9d20-b1208a1cd419_l2p_l1 /dev/hugepages/ftl_b68b3d0c-2388-4054-9d20-b1208a1cd419_l2p_l2 /dev/hugepages/ftl_b68b3d0c-2388-4054-9d20-b1208a1cd419_l2p_l2_ctx /dev/hugepages/ftl_b68b3d0c-2388-4054-9d20-b1208a1cd419_nvc_md /dev/hugepages/ftl_b68b3d0c-2388-4054-9d20-b1208a1cd419_p2l_pool /dev/hugepages/ftl_b68b3d0c-2388-4054-9d20-b1208a1cd419_sb /dev/hugepages/ftl_b68b3d0c-2388-4054-9d20-b1208a1cd419_sb_shm /dev/hugepages/ftl_b68b3d0c-2388-4054-9d20-b1208a1cd419_trim_bitmap /dev/hugepages/ftl_b68b3d0c-2388-4054-9d20-b1208a1cd419_trim_log /dev/hugepages/ftl_b68b3d0c-2388-4054-9d20-b1208a1cd419_trim_md /dev/hugepages/ftl_b68b3d0c-2388-4054-9d20-b1208a1cd419_vmap 00:30:47.990 18:05:07 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:30:47.990 18:05:07 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:47.990 18:05:07 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:30:47.990 ************************************ 00:30:47.990 END TEST ftl_restore_fast 00:30:47.990 ************************************ 00:30:47.990 00:30:47.990 real 4m12.702s 00:30:47.990 user 4m1.982s 00:30:47.990 sys 0m11.265s 00:30:47.990 18:05:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:47.990 18:05:07 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:30:47.991 Process with pid 84945 is not found 00:30:47.991 18:05:07 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:30:47.991 18:05:07 ftl -- ftl/ftl.sh@14 -- # killprocess 84945 00:30:47.991 
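A few figures in the dumps above can be cross-checked by hand, straight from the logged values: WAF = total writes / user writes = 2080 / 2048 = 1.015625, printed as 1.0156; the copy pass moved 1024 MB between roughly 18:04:09.7 and 18:05:05.4, about 55.7 s, i.e. 1024 / 55.7 = 18.4 MBps, consistent with the reported "average 18 MBps"; and, assuming FTL's 4 KiB block size (our assumption - the superblock dump counts raw blocks), trim_md at blk_sz:0x40 is 64 x 4 KiB = 0.25 MiB and the p2l regions at blk_sz:0x800 are 2048 x 4 KiB = 8.00 MiB, exactly the sizes the layout dump reports.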
18:05:07 ftl -- common/autotest_common.sh@952 -- # '[' -z 84945 ']' 00:30:47.991 18:05:07 ftl -- common/autotest_common.sh@956 -- # kill -0 84945 00:30:47.991 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (84945) - No such process 00:30:47.991 18:05:07 ftl -- common/autotest_common.sh@979 -- # echo 'Process with pid 84945 is not found' 00:30:47.991 18:05:07 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:30:47.991 18:05:07 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=95083 00:30:47.991 18:05:07 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:47.991 18:05:07 ftl -- ftl/ftl.sh@20 -- # waitforlisten 95083 00:30:47.991 18:05:07 ftl -- common/autotest_common.sh@833 -- # '[' -z 95083 ']' 00:30:47.991 18:05:07 ftl -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:47.991 18:05:07 ftl -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:47.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:47.991 18:05:07 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:47.991 18:05:07 ftl -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:47.991 18:05:07 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:47.991 [2024-11-05 18:05:07.751624] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 24.11.0-rc1 initialization... 00:30:47.991 [2024-11-05 18:05:07.751738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95083 ] 00:30:47.991 [2024-11-05 18:05:07.879672] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
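The "No such process" records above and the spdk.sock wait both rest on the same liveness probe: kill -0 delivers no signal, it only reports whether the PID still exists. A minimal sketch of the two idioms follows (the function names mirror the log; the bodies and retry count are our assumptions, not the actual autotest_common.sh source):

    killprocess() {                          # stop a target, tolerating an already-dead PID
        local pid=$1
        if kill -0 "$pid" 2>/dev/null; then
            kill "$pid" && wait "$pid"       # graceful stop, then reap the child
        else
            echo "Process with pid $pid is not found"
        fi
    }

    waitforlisten() {                        # block until the target's RPC socket appears
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < 100; i++)); do
            [[ -S $rpc_addr ]] && return 0           # socket is up -> RPC server is listening
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            sleep 0.1
        done
        return 1                             # timed out waiting for the socket
    }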
00:30:47.991 [2024-11-05 18:05:07.910082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:47.991 [2024-11-05 18:05:07.928296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:48.925 18:05:08 ftl -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:48.925 18:05:08 ftl -- common/autotest_common.sh@866 -- # return 0 00:30:48.925 18:05:08 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:48.925 nvme0n1 00:30:48.925 18:05:08 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:30:48.925 18:05:08 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:48.925 18:05:08 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:49.184 18:05:09 ftl -- ftl/common.sh@28 -- # stores=6dfa23b5-97ae-4fcb-a95e-855b83d9a5ea 00:30:49.184 18:05:09 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:30:49.184 18:05:09 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6dfa23b5-97ae-4fcb-a95e-855b83d9a5ea 00:30:49.445 18:05:09 ftl -- ftl/ftl.sh@23 -- # killprocess 95083 00:30:49.445 18:05:09 ftl -- common/autotest_common.sh@952 -- # '[' -z 95083 ']' 00:30:49.445 18:05:09 ftl -- common/autotest_common.sh@956 -- # kill -0 95083 00:30:49.445 18:05:09 ftl -- common/autotest_common.sh@957 -- # uname 00:30:49.445 18:05:09 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:49.445 18:05:09 ftl -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 95083 00:30:49.445 killing process with pid 95083 00:30:49.445 18:05:09 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:49.445 18:05:09 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:49.445 18:05:09 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 95083' 00:30:49.445 18:05:09 ftl -- common/autotest_common.sh@971 -- # kill 95083 00:30:49.445 18:05:09 ftl -- common/autotest_common.sh@976 -- # wait 95083 00:30:49.706 18:05:09 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:49.965 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:49.965 Waiting for block devices as requested 00:30:49.965 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:49.965 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:50.225 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:30:50.225 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:30:55.515 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:30:55.515 18:05:15 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:30:55.515 Remove shared memory files 00:30:55.515 18:05:15 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:55.515 18:05:15 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:30:55.515 18:05:15 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:30:55.515 18:05:15 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:30:55.515 18:05:15 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:55.515 18:05:15 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:30:55.515 00:30:55.515 real 15m15.059s 00:30:55.515 user 17m38.197s 00:30:55.515 sys 1m19.032s 00:30:55.515 ************************************ 00:30:55.515 END TEST ftl 00:30:55.515 ************************************ 00:30:55.515 18:05:15 ftl -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:55.515 18:05:15 ftl -- 
common/autotest_common.sh@10 -- # set +x 00:30:55.515 18:05:15 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:30:55.515 18:05:15 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:30:55.515 18:05:15 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:30:55.515 18:05:15 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:30:55.515 18:05:15 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:30:55.515 18:05:15 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:30:55.515 18:05:15 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:30:55.515 18:05:15 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:30:55.515 18:05:15 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:30:55.515 18:05:15 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:30:55.515 18:05:15 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:55.515 18:05:15 -- common/autotest_common.sh@10 -- # set +x 00:30:55.515 18:05:15 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:30:55.515 18:05:15 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:30:55.515 18:05:15 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:30:55.515 18:05:15 -- common/autotest_common.sh@10 -- # set +x 00:30:56.901 INFO: APP EXITING 00:30:56.901 INFO: killing all VMs 00:30:56.901 INFO: killing vhost app 00:30:56.901 INFO: EXIT DONE 00:30:56.901 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:57.473 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:30:57.473 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:30:57.473 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:30:57.473 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:30:57.734 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:57.995 Cleaning 00:30:57.995 Removing: /var/run/dpdk/spdk0/config 00:30:57.995 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:57.995 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:57.995 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:57.995 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:57.995 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:57.995 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:57.995 Removing: /var/run/dpdk/spdk0 00:30:57.995 Removing: /var/run/dpdk/spdk_pid70375 00:30:57.995 Removing: /var/run/dpdk/spdk_pid70539 00:30:57.995 Removing: /var/run/dpdk/spdk_pid70740 00:30:57.995 Removing: /var/run/dpdk/spdk_pid70822 00:30:57.995 Removing: /var/run/dpdk/spdk_pid70851 00:30:57.995 Removing: /var/run/dpdk/spdk_pid70962 00:30:57.995 Removing: /var/run/dpdk/spdk_pid70980 00:30:57.995 Removing: /var/run/dpdk/spdk_pid71157 00:30:57.995 Removing: /var/run/dpdk/spdk_pid71231 00:30:57.995 Removing: /var/run/dpdk/spdk_pid71310 00:30:57.995 Removing: /var/run/dpdk/spdk_pid71410 00:30:57.995 Removing: /var/run/dpdk/spdk_pid71491 00:30:57.995 Removing: /var/run/dpdk/spdk_pid71525 00:30:57.995 Removing: /var/run/dpdk/spdk_pid71561 00:30:58.255 Removing: /var/run/dpdk/spdk_pid71632 00:30:58.255 Removing: /var/run/dpdk/spdk_pid71738 00:30:58.255 Removing: /var/run/dpdk/spdk_pid72163 00:30:58.255 Removing: /var/run/dpdk/spdk_pid72210 00:30:58.255 Removing: /var/run/dpdk/spdk_pid72257 00:30:58.255 Removing: /var/run/dpdk/spdk_pid72273 00:30:58.255 Removing: /var/run/dpdk/spdk_pid72331 00:30:58.255 Removing: /var/run/dpdk/spdk_pid72347 00:30:58.255 Removing: /var/run/dpdk/spdk_pid72405 00:30:58.255 Removing: /var/run/dpdk/spdk_pid72421 00:30:58.255 
Removing: /var/run/dpdk/spdk_pid72474 00:30:58.255 Removing: /var/run/dpdk/spdk_pid72492 00:30:58.255 Removing: /var/run/dpdk/spdk_pid72534 00:30:58.255 Removing: /var/run/dpdk/spdk_pid72552 00:30:58.255 Removing: /var/run/dpdk/spdk_pid72679 00:30:58.255 Removing: /var/run/dpdk/spdk_pid72710 00:30:58.255 Removing: /var/run/dpdk/spdk_pid72799 00:30:58.255 Removing: /var/run/dpdk/spdk_pid72960 00:30:58.255 Removing: /var/run/dpdk/spdk_pid73022 00:30:58.255 Removing: /var/run/dpdk/spdk_pid73053 00:30:58.255 Removing: /var/run/dpdk/spdk_pid73486 00:30:58.255 Removing: /var/run/dpdk/spdk_pid73575 00:30:58.255 Removing: /var/run/dpdk/spdk_pid73681 00:30:58.255 Removing: /var/run/dpdk/spdk_pid73724 00:30:58.255 Removing: /var/run/dpdk/spdk_pid73744 00:30:58.255 Removing: /var/run/dpdk/spdk_pid73828 00:30:58.255 Removing: /var/run/dpdk/spdk_pid74432 00:30:58.255 Removing: /var/run/dpdk/spdk_pid74463 00:30:58.255 Removing: /var/run/dpdk/spdk_pid74917 00:30:58.255 Removing: /var/run/dpdk/spdk_pid75009 00:30:58.255 Removing: /var/run/dpdk/spdk_pid75113 00:30:58.255 Removing: /var/run/dpdk/spdk_pid75155 00:30:58.255 Removing: /var/run/dpdk/spdk_pid75175 00:30:58.255 Removing: /var/run/dpdk/spdk_pid75195 00:30:58.255 Removing: /var/run/dpdk/spdk_pid77019 00:30:58.255 Removing: /var/run/dpdk/spdk_pid77134 00:30:58.255 Removing: /var/run/dpdk/spdk_pid77144 00:30:58.255 Removing: /var/run/dpdk/spdk_pid77161 00:30:58.255 Removing: /var/run/dpdk/spdk_pid77201 00:30:58.255 Removing: /var/run/dpdk/spdk_pid77205 00:30:58.256 Removing: /var/run/dpdk/spdk_pid77217 00:30:58.256 Removing: /var/run/dpdk/spdk_pid77257 00:30:58.256 Removing: /var/run/dpdk/spdk_pid77261 00:30:58.256 Removing: /var/run/dpdk/spdk_pid77273 00:30:58.256 Removing: /var/run/dpdk/spdk_pid77312 00:30:58.256 Removing: /var/run/dpdk/spdk_pid77316 00:30:58.256 Removing: /var/run/dpdk/spdk_pid77328 00:30:58.256 Removing: /var/run/dpdk/spdk_pid78689 00:30:58.256 Removing: /var/run/dpdk/spdk_pid78775 00:30:58.256 Removing: /var/run/dpdk/spdk_pid80163 00:30:58.256 Removing: /var/run/dpdk/spdk_pid81530 00:30:58.256 Removing: /var/run/dpdk/spdk_pid81601 00:30:58.256 Removing: /var/run/dpdk/spdk_pid81656 00:30:58.256 Removing: /var/run/dpdk/spdk_pid81710 00:30:58.256 Removing: /var/run/dpdk/spdk_pid81789 00:30:58.256 Removing: /var/run/dpdk/spdk_pid81857 00:30:58.256 Removing: /var/run/dpdk/spdk_pid81994 00:30:58.256 Removing: /var/run/dpdk/spdk_pid82336 00:30:58.256 Removing: /var/run/dpdk/spdk_pid82367 00:30:58.256 Removing: /var/run/dpdk/spdk_pid82808 00:30:58.256 Removing: /var/run/dpdk/spdk_pid82992 00:30:58.256 Removing: /var/run/dpdk/spdk_pid83082 00:30:58.256 Removing: /var/run/dpdk/spdk_pid83180 00:30:58.256 Removing: /var/run/dpdk/spdk_pid83222 00:30:58.256 Removing: /var/run/dpdk/spdk_pid83248 00:30:58.256 Removing: /var/run/dpdk/spdk_pid83537 00:30:58.256 Removing: /var/run/dpdk/spdk_pid83575 00:30:58.256 Removing: /var/run/dpdk/spdk_pid83637 00:30:58.256 Removing: /var/run/dpdk/spdk_pid84014 00:30:58.256 Removing: /var/run/dpdk/spdk_pid84157 00:30:58.256 Removing: /var/run/dpdk/spdk_pid84945 00:30:58.256 Removing: /var/run/dpdk/spdk_pid85066 00:30:58.256 Removing: /var/run/dpdk/spdk_pid85224 00:30:58.256 Removing: /var/run/dpdk/spdk_pid85327 00:30:58.256 Removing: /var/run/dpdk/spdk_pid85629 00:30:58.256 Removing: /var/run/dpdk/spdk_pid85904 00:30:58.256 Removing: /var/run/dpdk/spdk_pid86234 00:30:58.256 Removing: /var/run/dpdk/spdk_pid86410 00:30:58.256 Removing: /var/run/dpdk/spdk_pid86522 00:30:58.256 Removing: 
/var/run/dpdk/spdk_pid86558 00:30:58.256 Removing: /var/run/dpdk/spdk_pid86798 00:30:58.256 Removing: /var/run/dpdk/spdk_pid86812 00:30:58.256 Removing: /var/run/dpdk/spdk_pid86859 00:30:58.256 Removing: /var/run/dpdk/spdk_pid87189 00:30:58.256 Removing: /var/run/dpdk/spdk_pid87405 00:30:58.256 Removing: /var/run/dpdk/spdk_pid87723 00:30:58.256 Removing: /var/run/dpdk/spdk_pid88347 00:30:58.256 Removing: /var/run/dpdk/spdk_pid89296 00:30:58.256 Removing: /var/run/dpdk/spdk_pid89779 00:30:58.256 Removing: /var/run/dpdk/spdk_pid89907 00:30:58.256 Removing: /var/run/dpdk/spdk_pid89988 00:30:58.256 Removing: /var/run/dpdk/spdk_pid90445 00:30:58.256 Removing: /var/run/dpdk/spdk_pid90510 00:30:58.256 Removing: /var/run/dpdk/spdk_pid90913 00:30:58.256 Removing: /var/run/dpdk/spdk_pid91179 00:30:58.256 Removing: /var/run/dpdk/spdk_pid91561 00:30:58.256 Removing: /var/run/dpdk/spdk_pid91672 00:30:58.256 Removing: /var/run/dpdk/spdk_pid91707 00:30:58.256 Removing: /var/run/dpdk/spdk_pid91755 00:30:58.256 Removing: /var/run/dpdk/spdk_pid91805 00:30:58.256 Removing: /var/run/dpdk/spdk_pid91854 00:30:58.256 Removing: /var/run/dpdk/spdk_pid92042 00:30:58.256 Removing: /var/run/dpdk/spdk_pid92129 00:30:58.256 Removing: /var/run/dpdk/spdk_pid92186 00:30:58.256 Removing: /var/run/dpdk/spdk_pid92265 00:30:58.256 Removing: /var/run/dpdk/spdk_pid92292 00:30:58.516 Removing: /var/run/dpdk/spdk_pid92348 00:30:58.516 Removing: /var/run/dpdk/spdk_pid92482 00:30:58.516 Removing: /var/run/dpdk/spdk_pid92685 00:30:58.516 Removing: /var/run/dpdk/spdk_pid93252 00:30:58.516 Removing: /var/run/dpdk/spdk_pid93856 00:30:58.516 Removing: /var/run/dpdk/spdk_pid94469 00:30:58.516 Removing: /var/run/dpdk/spdk_pid95083 00:30:58.516 Clean 00:30:58.516 18:05:18 -- common/autotest_common.sh@1451 -- # return 0 00:30:58.516 18:05:18 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:30:58.516 18:05:18 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:58.516 18:05:18 -- common/autotest_common.sh@10 -- # set +x 00:30:58.516 18:05:18 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:30:58.516 18:05:18 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:58.516 18:05:18 -- common/autotest_common.sh@10 -- # set +x 00:30:58.516 18:05:18 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:58.516 18:05:18 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:30:58.516 18:05:18 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:30:58.516 18:05:18 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:30:58.516 18:05:18 -- spdk/autotest.sh@394 -- # hostname 00:30:58.516 18:05:18 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:30:58.776 geninfo: WARNING: invalid characters removed from testname! 
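The geninfo warning just above is benign: lcov restricts test names to word characters, and the hyphenated -t value passed here, fedora39-cloud-1721788873-2326, trips that check, so the offending characters are stripped rather than treated as fatal. That is our reading of the message; the log itself does not elaborate.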
00:31:25.415 18:05:42 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:31:26.804 18:05:46 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:31:29.347 18:05:49 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:31:31.890 18:05:51 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:31:34.434 18:05:54 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:31:36.982 18:05:56 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:31:39.529 18:05:59 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:31:39.529 18:05:59 -- spdk/autorun.sh@1 -- $ timing_finish
00:31:39.529 18:05:59 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:31:39.529 18:05:59 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:31:39.529 18:05:59 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:31:39.529 18:05:59 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:31:39.529 + [[ -n 5767 ]]
00:31:39.529 + sudo kill 5767
00:31:39.539 [Pipeline] }
00:31:39.555 [Pipeline] // timeout
00:31:39.561 [Pipeline] }
00:31:39.577 [Pipeline] // stage
00:31:39.582 [Pipeline] }
00:31:39.596 [Pipeline] // catchError
00:31:39.621 [Pipeline] stage
00:31:39.627 [Pipeline] { (Stop VM)
00:31:39.641 [Pipeline] sh
00:31:39.925 + vagrant halt
00:31:42.468 ==> default: Halting domain...
00:31:47.807 [Pipeline] sh
00:31:48.089 + vagrant destroy -f
00:31:50.634 ==> default: Removing domain...
00:31:51.217 [Pipeline] sh
00:31:51.494 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:31:51.504 [Pipeline] }
00:31:51.515 [Pipeline] // stage
00:31:51.520 [Pipeline] }
00:31:51.531 [Pipeline] // dir
00:31:51.535 [Pipeline] }
00:31:51.548 [Pipeline] // wrap
00:31:51.554 [Pipeline] }
00:31:51.565 [Pipeline] // catchError
00:31:51.574 [Pipeline] stage
00:31:51.576 [Pipeline] { (Epilogue)
00:31:51.587 [Pipeline] sh
00:31:51.873 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:31:57.168 [Pipeline] catchError
00:31:57.170 [Pipeline] {
00:31:57.184 [Pipeline] sh
00:31:57.472 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:31:58.856 Artifacts sizes are good
00:31:58.867 [Pipeline] }
00:31:58.881 [Pipeline] // catchError
00:31:58.893 [Pipeline] archiveArtifacts
00:31:58.901 Archiving artifacts
00:31:58.999 [Pipeline] cleanWs
00:31:59.011 [WS-CLEANUP] Deleting project workspace...
00:31:59.011 [WS-CLEANUP] Deferred wipeout is used...
00:31:59.033 [WS-CLEANUP] done
00:31:59.035 [Pipeline] }
00:31:59.050 [Pipeline] // stage
00:31:59.055 [Pipeline] }
00:31:59.070 [Pipeline] // node
00:31:59.075 [Pipeline] End of Pipeline
00:31:59.122 Finished: SUCCESS