00:00:00.000 Started by upstream project "autotest-nightly" build number 4242 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3605 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.156 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.157 The recommended git tool is: git 00:00:00.157 using credential 00000000-0000-0000-0000-000000000002 00:00:00.159 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.210 Fetching changes from the remote Git repository 00:00:00.211 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.256 Using shallow fetch with depth 1 00:00:00.256 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.256 > git --version # timeout=10 00:00:00.291 > git --version # 'git version 2.39.2' 00:00:00.291 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.310 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.310 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.785 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.796 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.806 Checking out Revision 44e7d6069a399ee2647233b387d68a938882e7b7 (FETCH_HEAD) 00:00:06.806 > git config core.sparsecheckout # timeout=10 00:00:06.818 > git read-tree -mu HEAD # timeout=10 00:00:06.836 > git checkout -f 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=5 00:00:06.854 Commit message: "scripts/bmc: Rework Get NIC Info cmd parser" 00:00:06.854 > git rev-list --no-walk 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=10 00:00:06.973 [Pipeline] Start of Pipeline 00:00:06.988 [Pipeline] library 00:00:06.990 Loading library shm_lib@master 00:00:06.990 Library shm_lib@master is cached. Copying from home. 00:00:07.005 [Pipeline] node 00:00:07.018 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:07.020 [Pipeline] { 00:00:07.030 [Pipeline] catchError 00:00:07.031 [Pipeline] { 00:00:07.044 [Pipeline] wrap 00:00:07.054 [Pipeline] { 00:00:07.062 [Pipeline] stage 00:00:07.064 [Pipeline] { (Prologue) 00:00:07.085 [Pipeline] echo 00:00:07.087 Node: VM-host-SM38 00:00:07.093 [Pipeline] cleanWs 00:00:07.103 [WS-CLEANUP] Deleting project workspace... 00:00:07.103 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.110 [WS-CLEANUP] done 00:00:07.371 [Pipeline] setCustomBuildProperty 00:00:07.442 [Pipeline] httpRequest 00:00:07.809 [Pipeline] echo 00:00:07.811 Sorcerer 10.211.164.101 is alive 00:00:07.820 [Pipeline] retry 00:00:07.823 [Pipeline] { 00:00:07.838 [Pipeline] httpRequest 00:00:07.842 HttpMethod: GET 00:00:07.843 URL: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:07.843 Sending request to url: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:07.850 Response Code: HTTP/1.1 200 OK 00:00:07.850 Success: Status code 200 is in the accepted range: 200,404 00:00:07.851 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:08.660 [Pipeline] } 00:00:08.670 [Pipeline] // retry 00:00:08.674 [Pipeline] sh 00:00:08.957 + tar --no-same-owner -xf jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:08.968 [Pipeline] httpRequest 00:00:09.601 [Pipeline] echo 00:00:09.602 Sorcerer 10.211.164.101 is alive 00:00:09.610 [Pipeline] retry 00:00:09.612 [Pipeline] { 00:00:09.623 [Pipeline] httpRequest 00:00:09.627 HttpMethod: GET 00:00:09.628 URL: http://10.211.164.101/packages/spdk_fa3ab73844ced08f4f9487f5de71d477ca5cf604.tar.gz 00:00:09.629 Sending request to url: http://10.211.164.101/packages/spdk_fa3ab73844ced08f4f9487f5de71d477ca5cf604.tar.gz 00:00:09.647 Response Code: HTTP/1.1 200 OK 00:00:09.648 Success: Status code 200 is in the accepted range: 200,404 00:00:09.648 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_fa3ab73844ced08f4f9487f5de71d477ca5cf604.tar.gz 00:01:30.214 [Pipeline] } 00:01:30.231 [Pipeline] // retry 00:01:30.239 [Pipeline] sh 00:01:30.525 + tar --no-same-owner -xf spdk_fa3ab73844ced08f4f9487f5de71d477ca5cf604.tar.gz 00:01:33.842 [Pipeline] sh 00:01:34.127 + git -C spdk log --oneline -n5 00:01:34.127 fa3ab7384 bdev/raid: Fix raid_bdev->sb null pointer 00:01:34.127 12fc2abf1 test: Remove autopackage.sh 00:01:34.127 83ba90867 fio/bdev: fix typo in README 00:01:34.127 45379ed84 module/compress: Cleanup vol data, when claim fails 00:01:34.127 0afe95a3a bdev/nvme: use bdev_nvme linker script 00:01:34.148 [Pipeline] writeFile 00:01:34.162 [Pipeline] sh 00:01:34.447 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:34.460 [Pipeline] sh 00:01:34.744 + cat autorun-spdk.conf 00:01:34.744 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:34.744 SPDK_TEST_NVME=1 00:01:34.744 SPDK_TEST_FTL=1 00:01:34.744 SPDK_TEST_ISAL=1 00:01:34.744 SPDK_RUN_ASAN=1 00:01:34.744 SPDK_RUN_UBSAN=1 00:01:34.744 SPDK_TEST_XNVME=1 00:01:34.744 SPDK_TEST_NVME_FDP=1 00:01:34.744 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:34.752 RUN_NIGHTLY=1 00:01:34.754 [Pipeline] } 00:01:34.767 [Pipeline] // stage 00:01:34.782 [Pipeline] stage 00:01:34.784 [Pipeline] { (Run VM) 00:01:34.797 [Pipeline] sh 00:01:35.081 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:35.082 + echo 'Start stage prepare_nvme.sh' 00:01:35.082 Start stage prepare_nvme.sh 00:01:35.082 + [[ -n 9 ]] 00:01:35.082 + disk_prefix=ex9 00:01:35.082 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:35.082 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:35.082 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:35.082 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:35.082 ++ SPDK_TEST_NVME=1 00:01:35.082 ++ SPDK_TEST_FTL=1 00:01:35.082 ++ SPDK_TEST_ISAL=1 00:01:35.082 ++ SPDK_RUN_ASAN=1 00:01:35.082 ++ SPDK_RUN_UBSAN=1 00:01:35.082 
++ SPDK_TEST_XNVME=1
00:01:35.082 ++ SPDK_TEST_NVME_FDP=1
00:01:35.082 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:35.082 ++ RUN_NIGHTLY=1
00:01:35.082 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:35.082 + nvme_files=()
00:01:35.082 + declare -A nvme_files
00:01:35.082 + backend_dir=/var/lib/libvirt/images/backends
00:01:35.082 + nvme_files['nvme.img']=5G
00:01:35.082 + nvme_files['nvme-cmb.img']=5G
00:01:35.082 + nvme_files['nvme-multi0.img']=4G
00:01:35.082 + nvme_files['nvme-multi1.img']=4G
00:01:35.082 + nvme_files['nvme-multi2.img']=4G
00:01:35.082 + nvme_files['nvme-openstack.img']=8G
00:01:35.082 + nvme_files['nvme-zns.img']=5G
00:01:35.082 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:35.082 + (( SPDK_TEST_FTL == 1 ))
00:01:35.082 + nvme_files["nvme-ftl.img"]=6G
00:01:35.082 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:35.082 + nvme_files["nvme-fdp.img"]=1G
00:01:35.082 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:35.082 + for nvme in "${!nvme_files[@]}"
00:01:35.082 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi2.img -s 4G
00:01:35.082 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:35.082 + for nvme in "${!nvme_files[@]}"
00:01:35.082 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-ftl.img -s 6G
00:01:35.343 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:35.343 + for nvme in "${!nvme_files[@]}"
00:01:35.343 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-cmb.img -s 5G
00:01:35.343 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:35.343 + for nvme in "${!nvme_files[@]}"
00:01:35.343 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-openstack.img -s 8G
00:01:35.343 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:35.343 + for nvme in "${!nvme_files[@]}"
00:01:35.343 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-zns.img -s 5G
00:01:36.286 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:36.286 + for nvme in "${!nvme_files[@]}"
00:01:36.286 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi1.img -s 4G
00:01:36.286 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:36.286 + for nvme in "${!nvme_files[@]}"
00:01:36.286 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi0.img -s 4G
00:01:36.286 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:36.286 + for nvme in "${!nvme_files[@]}"
00:01:36.286 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-fdp.img -s 1G
00:01:36.286 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:36.286 + for nvme in "${!nvme_files[@]}"
00:01:36.286 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme.img -s 5G
00:01:36.858 Formatting '/var/lib/libvirt/images/backends/ex9-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:36.858 ++ sudo grep -rl ex9-nvme.img /etc/libvirt/qemu
00:01:36.858 + echo 'End stage prepare_nvme.sh'
00:01:36.858 End stage prepare_nvme.sh
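[Editor's note] The trace above is the whole backing-store setup: an associative array maps each image name to a size, the feature checks (SPDK_TEST_FTL, SPDK_TEST_NVME_FDP) append extra entries, and create_nvme_img.sh is invoked once per key. The "Formatting ..." lines are qemu-img output, so a minimal standalone sketch of the same pattern (an illustrative reimplementation, not the actual CI script, which may do more) would be:

    #!/usr/bin/env bash
    # Sketch of the image-creation loop above; names and sizes are copied
    # from the log, and qemu-img is assumed available (its "Formatting ..."
    # output matches what the log shows).
    set -euo pipefail
    backend_dir=/var/lib/libvirt/images/backends
    declare -A nvme_files=(
        ["nvme.img"]=5G ["nvme-cmb.img"]=5G ["nvme-zns.img"]=5G
        ["nvme-multi0.img"]=4G ["nvme-multi1.img"]=4G ["nvme-multi2.img"]=4G
        ["nvme-openstack.img"]=8G ["nvme-ftl.img"]=6G ["nvme-fdp.img"]=1G
    )
    mkdir -p "$backend_dir"
    for name in "${!nvme_files[@]}"; do
        # preallocation=falloc matches the Formatting lines in the log
        qemu-img create -f raw -o preallocation=falloc \
            "$backend_dir/ex9-$name" "${nvme_files[$name]}"
    done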
00:01:36.871 [Pipeline] sh
00:01:37.156 + DISTRO=fedora39
00:01:37.156 + CPUS=10
00:01:37.156 + RAM=12288
00:01:37.156 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:37.156 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex9-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex9-nvme.img -b /var/lib/libvirt/images/backends/ex9-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex9-nvme-multi1.img:/var/lib/libvirt/images/backends/ex9-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex9-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:37.156 
00:01:37.156 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:37.156 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:37.157 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:37.157 HELP=0
00:01:37.157 DRY_RUN=0
00:01:37.157 NVME_FILE=/var/lib/libvirt/images/backends/ex9-nvme-ftl.img,/var/lib/libvirt/images/backends/ex9-nvme.img,/var/lib/libvirt/images/backends/ex9-nvme-multi0.img,/var/lib/libvirt/images/backends/ex9-nvme-fdp.img,
00:01:37.157 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:37.157 NVME_AUTO_CREATE=0
00:01:37.157 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex9-nvme-multi1.img:/var/lib/libvirt/images/backends/ex9-nvme-multi2.img,,
00:01:37.157 NVME_CMB=,,,,
00:01:37.157 NVME_PMR=,,,,
00:01:37.157 NVME_ZNS=,,,,
00:01:37.157 NVME_MS=true,,,,
00:01:37.157 NVME_FDP=,,,on,
00:01:37.157 SPDK_VAGRANT_DISTRO=fedora39
00:01:37.157 SPDK_VAGRANT_VMCPU=10
00:01:37.157 SPDK_VAGRANT_VMRAM=12288
00:01:37.157 SPDK_VAGRANT_PROVIDER=libvirt
00:01:37.157 SPDK_VAGRANT_HTTP_PROXY=
00:01:37.157 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:37.157 SPDK_OPENSTACK_NETWORK=0
00:01:37.157 VAGRANT_PACKAGE_BOX=0
00:01:37.157 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:37.157 FORCE_DISTRO=true
00:01:37.157 VAGRANT_BOX_VERSION=
00:01:37.157 EXTRA_VAGRANTFILES=
00:01:37.157 NIC_MODEL=e1000
00:01:37.157 
00:01:37.157 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:01:37.157 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:39.703 Bringing machine 'default' up with 'libvirt' provider...
00:01:40.276 ==> default: Creating image (snapshot of base box volume).
00:01:40.276 ==> default: Creating domain with the following settings...
00:01:40.276 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730686346_b454560150f6e26f9f78
00:01:40.276 ==> default: -- Domain type: kvm
00:01:40.276 ==> default: -- Cpus: 10
00:01:40.276 ==> default: -- Feature: acpi
00:01:40.276 ==> default: -- Feature: apic
00:01:40.276 ==> default: -- Feature: pae
00:01:40.276 ==> default: -- Memory: 12288M
00:01:40.276 ==> default: -- Memory Backing: hugepages:
00:01:40.276 ==> default: -- Management MAC:
00:01:40.276 ==> default: -- Loader:
00:01:40.276 ==> default: -- Nvram:
00:01:40.276 ==> default: -- Base box: spdk/fedora39
00:01:40.276 ==> default: -- Storage pool: default
00:01:40.276 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730686346_b454560150f6e26f9f78.img (20G)
00:01:40.276 ==> default: -- Volume Cache: default
00:01:40.276 ==> default: -- Kernel:
00:01:40.276 ==> default: -- Initrd:
00:01:40.276 ==> default: -- Graphics Type: vnc
00:01:40.276 ==> default: -- Graphics Port: -1
00:01:40.276 ==> default: -- Graphics IP: 127.0.0.1
00:01:40.276 ==> default: -- Graphics Password: Not defined
00:01:40.276 ==> default: -- Video Type: cirrus
00:01:40.276 ==> default: -- Video VRAM: 9216
00:01:40.276 ==> default: -- Sound Type:
00:01:40.276 ==> default: -- Keymap: en-us
00:01:40.276 ==> default: -- TPM Path:
00:01:40.276 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:40.276 ==> default: -- Command line args:
00:01:40.276 ==> default: -> value=-device,
00:01:40.276 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:40.276 ==> default: -> value=-drive,
00:01:40.276 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:40.276 ==> default: -> value=-device,
00:01:40.276 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:40.276 ==> default: -> value=-device,
00:01:40.276 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:40.276 ==> default: -> value=-drive,
00:01:40.276 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme.img,if=none,id=nvme-1-drive0,
00:01:40.276 ==> default: -> value=-device,
00:01:40.276 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:40.276 ==> default: -> value=-device,
00:01:40.276 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:40.276 ==> default: -> value=-drive,
00:01:40.276 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:40.276 ==> default: -> value=-device,
00:01:40.276 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:40.276 ==> default: -> value=-drive,
00:01:40.276 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:40.276 ==> default: -> value=-device,
00:01:40.276 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:40.276 ==> default: -> value=-drive,
00:01:40.276 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:40.276 ==> default: -> value=-device,
00:01:40.276 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:40.276 ==> default: -> value=-device,
00:01:40.276 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:40.276 ==> default: -> value=-device,
00:01:40.276 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:40.276 ==> default: -> value=-drive,
00:01:40.276 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:40.276 ==> default: -> value=-device,
00:01:40.276 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
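[Editor's note] Read pairwise, the value= entries above are the extra -device/-drive arguments Vagrant passes through to QEMU: one NVMe controller per backing file, plus an NVMe subsystem (fdp-subsys3) with Flexible Data Placement enabled for the fourth controller. As a sketch, that FDP controller alone could be launched directly like this; the NVMe arguments are copied verbatim from the log, while the machine and boot-disk options are placeholder assumptions:

    # Hypothetical direct launch of just the FDP-enabled controller (nvme-3).
    # Requires a QEMU with nvme-subsys fdp.* support (this job pins v8.0.0).
    # The -machine/-m/boot-disk lines are placeholders; the NVMe lines are
    # taken from the log above.
    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
        -machine q35,accel=kvm -m 2048 \
        -drive file=base.img,if=virtio \
        -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
        -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-fdp.img,if=none,id=nvme-3-drive0 \
        -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096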
00:01:40.538 ==> default: Creating shared folders metadata...
00:01:40.538 ==> default: Starting domain.
00:01:42.455 ==> default: Waiting for domain to get an IP address...
00:02:04.437 ==> default: Waiting for SSH to become available...
00:02:04.437 ==> default: Configuring and enabling network interfaces...
00:02:06.997 default: SSH address: 192.168.121.110:22
00:02:06.997 default: SSH username: vagrant
00:02:06.997 default: SSH auth method: private key
00:02:08.914 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:17.064 ==> default: Mounting SSHFS shared folder...
00:02:18.980 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:18.980 ==> default: Checking Mount..
00:02:20.364 ==> default: Folder Successfully Mounted!
00:02:20.364 
00:02:20.364 SUCCESS!
00:02:20.364 
00:02:20.364 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:20.364 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:20.364 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:20.364 
00:02:20.373 [Pipeline] }
00:02:20.385 [Pipeline] // stage
00:02:20.394 [Pipeline] dir
00:02:20.395 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:20.397 [Pipeline] {
00:02:20.409 [Pipeline] catchError
00:02:20.411 [Pipeline] {
00:02:20.424 [Pipeline] sh
00:02:20.709 + vagrant ssh-config --host vagrant
00:02:20.709 + sed -ne '/^Host/,$p'
00:02:20.709 + tee ssh_conf
00:02:23.255 Host vagrant
00:02:23.255 HostName 192.168.121.110
00:02:23.255 User vagrant
00:02:23.255 Port 22
00:02:23.255 UserKnownHostsFile /dev/null
00:02:23.255 StrictHostKeyChecking no
00:02:23.255 PasswordAuthentication no
00:02:23.255 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:23.255 IdentitiesOnly yes
00:02:23.255 LogLevel FATAL
00:02:23.255 ForwardAgent yes
00:02:23.255 ForwardX11 yes
00:02:23.255 
00:02:23.270 [Pipeline] withEnv
00:02:23.272 [Pipeline] {
00:02:23.285 [Pipeline] sh
00:02:23.569 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:23.569 source /etc/os-release
00:02:23.569 [[ -e /image.version ]] && img=$(< /image.version)
00:02:23.569 # Minimal, systemd-like check.
00:02:23.569 if [[ -e /.dockerenv ]]; then 00:02:23.569 # Clear garbage from the node'\''s name: 00:02:23.569 # agt-er_autotest_547-896 -> autotest_547-896 00:02:23.569 # $HOSTNAME is the actual container id 00:02:23.569 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:23.569 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:23.569 # We can assume this is a mount from a host where container is running, 00:02:23.569 # so fetch its hostname to easily identify the target swarm worker. 00:02:23.569 container="$(< /etc/hostname) ($agent)" 00:02:23.569 else 00:02:23.569 # Fallback 00:02:23.569 container=$agent 00:02:23.569 fi 00:02:23.569 fi 00:02:23.569 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:23.569 ' 00:02:23.842 [Pipeline] } 00:02:23.858 [Pipeline] // withEnv 00:02:23.866 [Pipeline] setCustomBuildProperty 00:02:23.881 [Pipeline] stage 00:02:23.883 [Pipeline] { (Tests) 00:02:23.899 [Pipeline] sh 00:02:24.186 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:24.463 [Pipeline] sh 00:02:24.747 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:25.023 [Pipeline] timeout 00:02:25.023 Timeout set to expire in 50 min 00:02:25.025 [Pipeline] { 00:02:25.039 [Pipeline] sh 00:02:25.320 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:25.893 HEAD is now at fa3ab7384 bdev/raid: Fix raid_bdev->sb null pointer 00:02:25.905 [Pipeline] sh 00:02:26.191 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:26.467 [Pipeline] sh 00:02:26.752 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:27.030 [Pipeline] sh 00:02:27.317 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:27.578 ++ readlink -f spdk_repo 00:02:27.578 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:27.578 + [[ -n /home/vagrant/spdk_repo ]] 00:02:27.578 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:27.578 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:27.578 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:27.578 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:27.578 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:27.578 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:27.578 + cd /home/vagrant/spdk_repo 00:02:27.578 + source /etc/os-release 00:02:27.578 ++ NAME='Fedora Linux' 00:02:27.578 ++ VERSION='39 (Cloud Edition)' 00:02:27.578 ++ ID=fedora 00:02:27.578 ++ VERSION_ID=39 00:02:27.578 ++ VERSION_CODENAME= 00:02:27.578 ++ PLATFORM_ID=platform:f39 00:02:27.578 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:27.578 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:27.578 ++ LOGO=fedora-logo-icon 00:02:27.578 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:27.578 ++ HOME_URL=https://fedoraproject.org/ 00:02:27.578 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:27.578 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:27.578 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:27.578 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:27.578 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:27.578 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:27.578 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:27.578 ++ SUPPORT_END=2024-11-12 00:02:27.578 ++ VARIANT='Cloud Edition' 00:02:27.578 ++ VARIANT_ID=cloud 00:02:27.578 + uname -a 00:02:27.578 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:27.578 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:27.840 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:28.102 Hugepages 00:02:28.102 node hugesize free / total 00:02:28.102 node0 1048576kB 0 / 0 00:02:28.364 node0 2048kB 0 / 0 00:02:28.364 00:02:28.364 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:28.364 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:28.364 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:28.364 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:28.364 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:28.364 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:28.364 + rm -f /tmp/spdk-ld-path 00:02:28.364 + source autorun-spdk.conf 00:02:28.364 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:28.364 ++ SPDK_TEST_NVME=1 00:02:28.364 ++ SPDK_TEST_FTL=1 00:02:28.364 ++ SPDK_TEST_ISAL=1 00:02:28.364 ++ SPDK_RUN_ASAN=1 00:02:28.364 ++ SPDK_RUN_UBSAN=1 00:02:28.364 ++ SPDK_TEST_XNVME=1 00:02:28.364 ++ SPDK_TEST_NVME_FDP=1 00:02:28.364 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:28.364 ++ RUN_NIGHTLY=1 00:02:28.364 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:28.364 + [[ -n '' ]] 00:02:28.364 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:28.364 + for M in /var/spdk/build-*-manifest.txt 00:02:28.364 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:28.364 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:28.364 + for M in /var/spdk/build-*-manifest.txt 00:02:28.364 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:28.364 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:28.364 + for M in /var/spdk/build-*-manifest.txt 00:02:28.364 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:28.364 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:28.364 ++ uname 00:02:28.364 + [[ Linux == \L\i\n\u\x ]] 00:02:28.364 + sudo dmesg -T 00:02:28.364 + sudo dmesg --clear 00:02:28.364 + dmesg_pid=5029 00:02:28.364 
+ [[ Fedora Linux == FreeBSD ]] 00:02:28.364 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:28.364 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:28.364 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:28.364 + sudo dmesg -Tw 00:02:28.364 + [[ -x /usr/src/fio-static/fio ]] 00:02:28.364 + export FIO_BIN=/usr/src/fio-static/fio 00:02:28.364 + FIO_BIN=/usr/src/fio-static/fio 00:02:28.364 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:28.364 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:28.364 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:28.364 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:28.364 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:28.364 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:28.364 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:28.364 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:28.364 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:28.626 02:13:15 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:28.626 02:13:15 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:28.626 02:13:15 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:28.626 02:13:15 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:02:28.626 02:13:15 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:02:28.626 02:13:15 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:02:28.626 02:13:15 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:02:28.626 02:13:15 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:28.626 02:13:15 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:02:28.626 02:13:15 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:02:28.626 02:13:15 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:28.626 02:13:15 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=1 00:02:28.626 02:13:15 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:28.626 02:13:15 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:28.626 02:13:15 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:28.626 02:13:15 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:28.626 02:13:15 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:28.626 02:13:15 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:28.626 02:13:15 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:28.626 02:13:15 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:28.626 02:13:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.626 02:13:15 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.626 02:13:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.626 02:13:15 -- paths/export.sh@5 -- $ export PATH 00:02:28.626 02:13:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.626 02:13:15 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:28.626 02:13:15 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:28.626 02:13:15 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730686395.XXXXXX 00:02:28.626 02:13:15 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730686395.GTgJwO 00:02:28.626 02:13:15 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:28.626 02:13:15 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:02:28.626 02:13:15 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:28.626 02:13:15 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:28.626 02:13:15 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:28.626 02:13:15 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:28.626 02:13:15 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:28.626 02:13:15 -- common/autotest_common.sh@10 -- $ set +x 00:02:28.626 02:13:15 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:28.627 02:13:15 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:28.627 02:13:15 -- pm/common@17 -- $ local monitor 00:02:28.627 02:13:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:28.627 02:13:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:28.627 02:13:15 -- pm/common@25 -- $ sleep 1 00:02:28.627 02:13:15 -- pm/common@21 -- $ date +%s 00:02:28.627 02:13:15 -- pm/common@21 -- $ date +%s 00:02:28.627 02:13:15 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730686395 00:02:28.627 02:13:15 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730686395 00:02:28.627 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730686395_collect-vmstat.pm.log 00:02:28.627 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730686395_collect-cpu-load.pm.log 00:02:29.571 02:13:16 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:29.571 02:13:16 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:29.571 02:13:16 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:29.571 02:13:16 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:29.571 02:13:16 -- spdk/autobuild.sh@16 -- $ date -u 00:02:29.571 Mon Nov 4 02:13:16 AM UTC 2024 00:02:29.571 02:13:16 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:29.571 v25.01-pre-124-gfa3ab7384 00:02:29.571 02:13:16 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:29.571 02:13:16 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:29.571 02:13:16 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:29.571 02:13:16 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:29.571 02:13:16 -- common/autotest_common.sh@10 -- $ set +x 00:02:29.571 ************************************ 00:02:29.571 START TEST asan 00:02:29.571 ************************************ 00:02:29.571 using asan 00:02:29.571 02:13:16 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan' 00:02:29.571 00:02:29.571 real 0m0.000s 00:02:29.571 user 0m0.000s 00:02:29.571 sys 0m0.000s 00:02:29.571 ************************************ 00:02:29.571 END TEST asan 00:02:29.571 ************************************ 00:02:29.571 02:13:16 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:29.571 02:13:16 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:29.835 02:13:16 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:29.835 02:13:16 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:29.835 02:13:16 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:29.835 02:13:16 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:29.835 02:13:16 -- common/autotest_common.sh@10 -- $ set +x 00:02:29.835 ************************************ 00:02:29.835 START TEST ubsan 00:02:29.835 ************************************ 00:02:29.835 using ubsan 00:02:29.835 02:13:16 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:02:29.835 00:02:29.835 real 0m0.000s 00:02:29.835 user 0m0.000s 00:02:29.835 sys 0m0.000s 00:02:29.835 02:13:16 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:29.835 ************************************ 00:02:29.835 END TEST ubsan 00:02:29.835 ************************************ 00:02:29.835 02:13:16 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:29.835 02:13:16 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:29.835 02:13:16 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:29.835 02:13:16 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:29.835 02:13:16 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:29.835 02:13:16 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:29.835 02:13:16 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:29.835 02:13:16 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
00:02:29.835 02:13:16 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:29.835 02:13:16 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:02:29.835 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:29.835 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:30.406 Using 'verbs' RDMA provider 00:02:41.339 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:51.313 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:51.829 Creating mk/config.mk...done. 00:02:51.829 Creating mk/cc.flags.mk...done. 00:02:51.829 Type 'make' to build. 00:02:51.829 02:13:38 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:51.829 02:13:38 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:51.829 02:13:38 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:51.829 02:13:38 -- common/autotest_common.sh@10 -- $ set +x 00:02:51.829 ************************************ 00:02:51.829 START TEST make 00:02:51.829 ************************************ 00:02:51.829 02:13:38 make -- common/autotest_common.sh@1127 -- $ make -j10 00:02:52.087 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:52.087 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:52.087 meson setup builddir \ 00:02:52.087 -Dwith-libaio=enabled \ 00:02:52.087 -Dwith-liburing=enabled \ 00:02:52.087 -Dwith-libvfn=disabled \ 00:02:52.087 -Dwith-spdk=disabled \ 00:02:52.087 -Dexamples=false \ 00:02:52.087 -Dtests=false \ 00:02:52.087 -Dtools=false && \ 00:02:52.087 meson compile -C builddir && \ 00:02:52.087 cd -) 00:02:52.087 make[1]: Nothing to be done for 'all'. 
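[Editor's note] The make hook above configures the bundled xnvme as a library-only build: the libaio and io_uring backends are enabled, libvfn and the SPDK backend are disabled, and examples, tests, and tools are all switched off. Stripped of the CI plumbing, the same configure-and-build step is just the following (a sketch assuming an xnvme checkout in the current directory):

    # Standalone version of the xnvme configure/build step shown above.
    cd xnvme
    meson setup builddir \
        -Dwith-libaio=enabled \
        -Dwith-liburing=enabled \
        -Dwith-libvfn=disabled \
        -Dwith-spdk=disabled \
        -Dexamples=false -Dtests=false -Dtools=false
    meson compile -C builddir
    # 'meson configure builddir' can be run afterwards to inspect the resolved options.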
00:02:53.987 The Meson build system 00:02:53.987 Version: 1.5.0 00:02:53.987 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:53.987 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:53.987 Build type: native build 00:02:53.987 Project name: xnvme 00:02:53.987 Project version: 0.7.5 00:02:53.987 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:53.987 C linker for the host machine: cc ld.bfd 2.40-14 00:02:53.987 Host machine cpu family: x86_64 00:02:53.987 Host machine cpu: x86_64 00:02:53.987 Message: host_machine.system: linux 00:02:53.987 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:53.987 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:53.987 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:53.987 Run-time dependency threads found: YES 00:02:53.987 Has header "setupapi.h" : NO 00:02:53.987 Has header "linux/blkzoned.h" : YES 00:02:53.987 Has header "linux/blkzoned.h" : YES (cached) 00:02:53.987 Has header "libaio.h" : YES 00:02:53.987 Library aio found: YES 00:02:53.987 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:53.987 Run-time dependency liburing found: YES 2.2 00:02:53.987 Dependency libvfn skipped: feature with-libvfn disabled 00:02:53.987 Found CMake: /usr/bin/cmake (3.27.7) 00:02:53.987 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:02:53.987 Subproject spdk : skipped: feature with-spdk disabled 00:02:53.987 Run-time dependency appleframeworks found: NO (tried framework) 00:02:53.987 Run-time dependency appleframeworks found: NO (tried framework) 00:02:53.987 Library rt found: YES 00:02:53.987 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:53.987 Configuring xnvme_config.h using configuration 00:02:53.987 Configuring xnvme.spec using configuration 00:02:53.987 Run-time dependency bash-completion found: YES 2.11 00:02:53.987 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:53.987 Program cp found: YES (/usr/bin/cp) 00:02:53.987 Build targets in project: 3 00:02:53.987 00:02:53.987 xnvme 0.7.5 00:02:53.987 00:02:53.987 Subprojects 00:02:53.987 spdk : NO Feature 'with-spdk' disabled 00:02:53.987 00:02:53.987 User defined options 00:02:53.987 examples : false 00:02:53.987 tests : false 00:02:53.987 tools : false 00:02:53.987 with-libaio : enabled 00:02:53.987 with-liburing: enabled 00:02:53.987 with-libvfn : disabled 00:02:53.987 with-spdk : disabled 00:02:53.987 00:02:53.987 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:54.245 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:54.245 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:02:54.245 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:02:54.245 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:02:54.245 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:02:54.245 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:02:54.245 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:02:54.245 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:02:54.504 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:02:54.504 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:02:54.504 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:02:54.504 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:02:54.504 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:02:54.504 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:02:54.504 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:02:54.504 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:02:54.504 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:02:54.504 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:02:54.504 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:02:54.504 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:02:54.504 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:02:54.504 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:02:54.504 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:02:54.504 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:02:54.504 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:02:54.504 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:02:54.504 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:02:54.504 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:02:54.504 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:02:54.504 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:02:54.504 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:02:54.504 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:02:54.504 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:02:54.504 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:02:54.504 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:02:54.504 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:02:54.504 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:02:54.762 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:02:54.762 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:02:54.762 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:02:54.762 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:02:54.762 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:02:54.762 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:02:54.762 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:02:54.762 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:02:54.762 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:02:54.762 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:02:54.762 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:02:54.762 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:02:54.763 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:02:54.763 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:02:54.763 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:02:54.763 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:02:54.763 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:02:54.763 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:02:54.763 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:02:54.763 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:02:54.763 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:02:54.763 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:02:54.763 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:02:54.763 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:02:54.763 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:02:54.763 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:02:54.763 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:02:54.763 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:02:54.763 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:02:55.022 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:02:55.022 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:02:55.022 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:02:55.022 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:02:55.022 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:02:55.022 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:02:55.022 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:02:55.022 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:02:55.280 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:02:55.280 [75/76] Linking static target lib/libxnvme.a 00:02:55.280 [76/76] Linking target lib/libxnvme.so.0.7.5 00:02:55.280 INFO: autodetecting backend as ninja 00:02:55.280 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:55.280 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:03:01.839 The Meson build system 00:03:01.839 Version: 1.5.0 00:03:01.839 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:01.839 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:01.839 Build type: native build 00:03:01.839 Program cat found: YES (/usr/bin/cat) 00:03:01.839 Project name: DPDK 00:03:01.839 Project version: 24.03.0 00:03:01.839 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:01.839 C linker for the host machine: cc ld.bfd 2.40-14 00:03:01.839 Host machine cpu family: x86_64 00:03:01.839 Host machine cpu: x86_64 00:03:01.839 Message: ## Building in Developer Mode ## 00:03:01.839 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:01.839 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:01.839 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:01.839 Program python3 found: YES (/usr/bin/python3) 00:03:01.839 Program cat found: YES (/usr/bin/cat) 00:03:01.839 Compiler for C supports arguments -march=native: YES 00:03:01.839 Checking for size of "void *" : 8 00:03:01.839 Checking for size of "void *" : 8 (cached) 00:03:01.839 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:03:01.839 Library m found: YES 00:03:01.839 Library numa found: YES 00:03:01.839 Has header "numaif.h" : YES 00:03:01.839 Library fdt found: NO 00:03:01.839 Library execinfo found: NO 00:03:01.839 Has header "execinfo.h" : YES 00:03:01.839 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:01.839 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:01.839 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:01.839 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:01.839 Run-time dependency openssl found: YES 3.1.1 00:03:01.839 Run-time dependency libpcap found: YES 1.10.4 00:03:01.839 Has header "pcap.h" with dependency libpcap: YES 00:03:01.839 Compiler for C supports arguments -Wcast-qual: YES 00:03:01.839 Compiler for C supports arguments -Wdeprecated: YES 00:03:01.839 Compiler for C supports arguments -Wformat: YES 00:03:01.839 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:01.839 Compiler for C supports arguments -Wformat-security: NO 00:03:01.839 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:01.839 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:01.839 Compiler for C supports arguments -Wnested-externs: YES 00:03:01.839 Compiler for C supports arguments -Wold-style-definition: YES 00:03:01.839 Compiler for C supports arguments -Wpointer-arith: YES 00:03:01.839 Compiler for C supports arguments -Wsign-compare: YES 00:03:01.839 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:01.839 Compiler for C supports arguments -Wundef: YES 00:03:01.839 Compiler for C supports arguments -Wwrite-strings: YES 00:03:01.839 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:01.839 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:01.839 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:01.839 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:01.839 Program objdump found: YES (/usr/bin/objdump) 00:03:01.839 Compiler for C supports arguments -mavx512f: YES 00:03:01.839 Checking if "AVX512 checking" compiles: YES 00:03:01.839 Fetching value of define "__SSE4_2__" : 1 00:03:01.839 Fetching value of define "__AES__" : 1 00:03:01.839 Fetching value of define "__AVX__" : 1 00:03:01.839 Fetching value of define "__AVX2__" : 1 00:03:01.839 Fetching value of define "__AVX512BW__" : 1 00:03:01.839 Fetching value of define "__AVX512CD__" : 1 00:03:01.839 Fetching value of define "__AVX512DQ__" : 1 00:03:01.839 Fetching value of define "__AVX512F__" : 1 00:03:01.839 Fetching value of define "__AVX512VL__" : 1 00:03:01.839 Fetching value of define "__PCLMUL__" : 1 00:03:01.839 Fetching value of define "__RDRND__" : 1 00:03:01.839 Fetching value of define "__RDSEED__" : 1 00:03:01.839 Fetching value of define "__VPCLMULQDQ__" : 1 00:03:01.839 Fetching value of define "__znver1__" : (undefined) 00:03:01.839 Fetching value of define "__znver2__" : (undefined) 00:03:01.839 Fetching value of define "__znver3__" : (undefined) 00:03:01.839 Fetching value of define "__znver4__" : (undefined) 00:03:01.839 Library asan found: YES 00:03:01.839 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:01.839 Message: lib/log: Defining dependency "log" 00:03:01.839 Message: lib/kvargs: Defining dependency "kvargs" 00:03:01.839 Message: lib/telemetry: Defining dependency "telemetry" 00:03:01.839 Library rt found: YES 00:03:01.839 Checking for function "getentropy" : NO 00:03:01.839 Message: 
lib/eal: Defining dependency "eal" 00:03:01.839 Message: lib/ring: Defining dependency "ring" 00:03:01.839 Message: lib/rcu: Defining dependency "rcu" 00:03:01.839 Message: lib/mempool: Defining dependency "mempool" 00:03:01.839 Message: lib/mbuf: Defining dependency "mbuf" 00:03:01.839 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:01.839 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:01.839 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:01.839 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:01.839 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:01.839 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:03:01.839 Compiler for C supports arguments -mpclmul: YES 00:03:01.839 Compiler for C supports arguments -maes: YES 00:03:01.839 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:01.839 Compiler for C supports arguments -mavx512bw: YES 00:03:01.839 Compiler for C supports arguments -mavx512dq: YES 00:03:01.839 Compiler for C supports arguments -mavx512vl: YES 00:03:01.839 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:01.839 Compiler for C supports arguments -mavx2: YES 00:03:01.839 Compiler for C supports arguments -mavx: YES 00:03:01.839 Message: lib/net: Defining dependency "net" 00:03:01.839 Message: lib/meter: Defining dependency "meter" 00:03:01.839 Message: lib/ethdev: Defining dependency "ethdev" 00:03:01.839 Message: lib/pci: Defining dependency "pci" 00:03:01.839 Message: lib/cmdline: Defining dependency "cmdline" 00:03:01.839 Message: lib/hash: Defining dependency "hash" 00:03:01.839 Message: lib/timer: Defining dependency "timer" 00:03:01.839 Message: lib/compressdev: Defining dependency "compressdev" 00:03:01.839 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:01.839 Message: lib/dmadev: Defining dependency "dmadev" 00:03:01.839 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:01.840 Message: lib/power: Defining dependency "power" 00:03:01.840 Message: lib/reorder: Defining dependency "reorder" 00:03:01.840 Message: lib/security: Defining dependency "security" 00:03:01.840 Has header "linux/userfaultfd.h" : YES 00:03:01.840 Has header "linux/vduse.h" : YES 00:03:01.840 Message: lib/vhost: Defining dependency "vhost" 00:03:01.840 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:01.840 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:01.840 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:01.840 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:01.840 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:01.840 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:01.840 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:01.840 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:01.840 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:01.840 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:01.840 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:01.840 Configuring doxy-api-html.conf using configuration 00:03:01.840 Configuring doxy-api-man.conf using configuration 00:03:01.840 Program mandb found: YES (/usr/bin/mandb) 00:03:01.840 Program sphinx-build found: NO 00:03:01.840 Configuring rte_build_config.h using configuration 00:03:01.840 Message: 00:03:01.840 ================= 00:03:01.840 Applications Enabled 00:03:01.840 
================= 00:03:01.840 00:03:01.840 apps: 00:03:01.840 00:03:01.840 00:03:01.840 Message: 00:03:01.840 ================= 00:03:01.840 Libraries Enabled 00:03:01.840 ================= 00:03:01.840 00:03:01.840 libs: 00:03:01.840 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:01.840 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:01.840 cryptodev, dmadev, power, reorder, security, vhost, 00:03:01.840 00:03:01.840 Message: 00:03:01.840 =============== 00:03:01.840 Drivers Enabled 00:03:01.840 =============== 00:03:01.840 00:03:01.840 common: 00:03:01.840 00:03:01.840 bus: 00:03:01.840 pci, vdev, 00:03:01.840 mempool: 00:03:01.840 ring, 00:03:01.840 dma: 00:03:01.840 00:03:01.840 net: 00:03:01.840 00:03:01.840 crypto: 00:03:01.840 00:03:01.840 compress: 00:03:01.840 00:03:01.840 vdpa: 00:03:01.840 00:03:01.840 00:03:01.840 Message: 00:03:01.840 ================= 00:03:01.840 Content Skipped 00:03:01.840 ================= 00:03:01.840 00:03:01.840 apps: 00:03:01.840 dumpcap: explicitly disabled via build config 00:03:01.840 graph: explicitly disabled via build config 00:03:01.840 pdump: explicitly disabled via build config 00:03:01.840 proc-info: explicitly disabled via build config 00:03:01.840 test-acl: explicitly disabled via build config 00:03:01.840 test-bbdev: explicitly disabled via build config 00:03:01.840 test-cmdline: explicitly disabled via build config 00:03:01.840 test-compress-perf: explicitly disabled via build config 00:03:01.840 test-crypto-perf: explicitly disabled via build config 00:03:01.840 test-dma-perf: explicitly disabled via build config 00:03:01.840 test-eventdev: explicitly disabled via build config 00:03:01.840 test-fib: explicitly disabled via build config 00:03:01.840 test-flow-perf: explicitly disabled via build config 00:03:01.840 test-gpudev: explicitly disabled via build config 00:03:01.840 test-mldev: explicitly disabled via build config 00:03:01.840 test-pipeline: explicitly disabled via build config 00:03:01.840 test-pmd: explicitly disabled via build config 00:03:01.840 test-regex: explicitly disabled via build config 00:03:01.840 test-sad: explicitly disabled via build config 00:03:01.840 test-security-perf: explicitly disabled via build config 00:03:01.840 00:03:01.840 libs: 00:03:01.840 argparse: explicitly disabled via build config 00:03:01.840 metrics: explicitly disabled via build config 00:03:01.840 acl: explicitly disabled via build config 00:03:01.840 bbdev: explicitly disabled via build config 00:03:01.840 bitratestats: explicitly disabled via build config 00:03:01.840 bpf: explicitly disabled via build config 00:03:01.840 cfgfile: explicitly disabled via build config 00:03:01.840 distributor: explicitly disabled via build config 00:03:01.840 efd: explicitly disabled via build config 00:03:01.840 eventdev: explicitly disabled via build config 00:03:01.840 dispatcher: explicitly disabled via build config 00:03:01.840 gpudev: explicitly disabled via build config 00:03:01.840 gro: explicitly disabled via build config 00:03:01.840 gso: explicitly disabled via build config 00:03:01.840 ip_frag: explicitly disabled via build config 00:03:01.840 jobstats: explicitly disabled via build config 00:03:01.840 latencystats: explicitly disabled via build config 00:03:01.840 lpm: explicitly disabled via build config 00:03:01.840 member: explicitly disabled via build config 00:03:01.840 pcapng: explicitly disabled via build config 00:03:01.840 rawdev: explicitly disabled via build config 00:03:01.840 regexdev: explicitly 
disabled via build config 00:03:01.840 mldev: explicitly disabled via build config 00:03:01.840 rib: explicitly disabled via build config 00:03:01.840 sched: explicitly disabled via build config 00:03:01.840 stack: explicitly disabled via build config 00:03:01.840 ipsec: explicitly disabled via build config 00:03:01.840 pdcp: explicitly disabled via build config 00:03:01.840 fib: explicitly disabled via build config 00:03:01.840 port: explicitly disabled via build config 00:03:01.840 pdump: explicitly disabled via build config 00:03:01.840 table: explicitly disabled via build config 00:03:01.840 pipeline: explicitly disabled via build config 00:03:01.840 graph: explicitly disabled via build config 00:03:01.840 node: explicitly disabled via build config 00:03:01.840 00:03:01.840 drivers: 00:03:01.840 common/cpt: not in enabled drivers build config 00:03:01.840 common/dpaax: not in enabled drivers build config 00:03:01.840 common/iavf: not in enabled drivers build config 00:03:01.840 common/idpf: not in enabled drivers build config 00:03:01.840 common/ionic: not in enabled drivers build config 00:03:01.840 common/mvep: not in enabled drivers build config 00:03:01.840 common/octeontx: not in enabled drivers build config 00:03:01.840 bus/auxiliary: not in enabled drivers build config 00:03:01.840 bus/cdx: not in enabled drivers build config 00:03:01.840 bus/dpaa: not in enabled drivers build config 00:03:01.840 bus/fslmc: not in enabled drivers build config 00:03:01.840 bus/ifpga: not in enabled drivers build config 00:03:01.840 bus/platform: not in enabled drivers build config 00:03:01.840 bus/uacce: not in enabled drivers build config 00:03:01.840 bus/vmbus: not in enabled drivers build config 00:03:01.840 common/cnxk: not in enabled drivers build config 00:03:01.840 common/mlx5: not in enabled drivers build config 00:03:01.840 common/nfp: not in enabled drivers build config 00:03:01.840 common/nitrox: not in enabled drivers build config 00:03:01.840 common/qat: not in enabled drivers build config 00:03:01.840 common/sfc_efx: not in enabled drivers build config 00:03:01.840 mempool/bucket: not in enabled drivers build config 00:03:01.840 mempool/cnxk: not in enabled drivers build config 00:03:01.840 mempool/dpaa: not in enabled drivers build config 00:03:01.840 mempool/dpaa2: not in enabled drivers build config 00:03:01.840 mempool/octeontx: not in enabled drivers build config 00:03:01.840 mempool/stack: not in enabled drivers build config 00:03:01.840 dma/cnxk: not in enabled drivers build config 00:03:01.840 dma/dpaa: not in enabled drivers build config 00:03:01.840 dma/dpaa2: not in enabled drivers build config 00:03:01.840 dma/hisilicon: not in enabled drivers build config 00:03:01.840 dma/idxd: not in enabled drivers build config 00:03:01.840 dma/ioat: not in enabled drivers build config 00:03:01.840 dma/skeleton: not in enabled drivers build config 00:03:01.840 net/af_packet: not in enabled drivers build config 00:03:01.840 net/af_xdp: not in enabled drivers build config 00:03:01.840 net/ark: not in enabled drivers build config 00:03:01.840 net/atlantic: not in enabled drivers build config 00:03:01.840 net/avp: not in enabled drivers build config 00:03:01.840 net/axgbe: not in enabled drivers build config 00:03:01.840 net/bnx2x: not in enabled drivers build config 00:03:01.840 net/bnxt: not in enabled drivers build config 00:03:01.840 net/bonding: not in enabled drivers build config 00:03:01.840 net/cnxk: not in enabled drivers build config 00:03:01.840 net/cpfl: not in enabled drivers 
build config 00:03:01.840 net/cxgbe: not in enabled drivers build config 00:03:01.840 net/dpaa: not in enabled drivers build config 00:03:01.840 net/dpaa2: not in enabled drivers build config 00:03:01.840 net/e1000: not in enabled drivers build config 00:03:01.840 net/ena: not in enabled drivers build config 00:03:01.840 net/enetc: not in enabled drivers build config 00:03:01.840 net/enetfec: not in enabled drivers build config 00:03:01.840 net/enic: not in enabled drivers build config 00:03:01.840 net/failsafe: not in enabled drivers build config 00:03:01.840 net/fm10k: not in enabled drivers build config 00:03:01.840 net/gve: not in enabled drivers build config 00:03:01.840 net/hinic: not in enabled drivers build config 00:03:01.840 net/hns3: not in enabled drivers build config 00:03:01.840 net/i40e: not in enabled drivers build config 00:03:01.840 net/iavf: not in enabled drivers build config 00:03:01.840 net/ice: not in enabled drivers build config 00:03:01.840 net/idpf: not in enabled drivers build config 00:03:01.840 net/igc: not in enabled drivers build config 00:03:01.840 net/ionic: not in enabled drivers build config 00:03:01.840 net/ipn3ke: not in enabled drivers build config 00:03:01.840 net/ixgbe: not in enabled drivers build config 00:03:01.840 net/mana: not in enabled drivers build config 00:03:01.840 net/memif: not in enabled drivers build config 00:03:01.840 net/mlx4: not in enabled drivers build config 00:03:01.840 net/mlx5: not in enabled drivers build config 00:03:01.840 net/mvneta: not in enabled drivers build config 00:03:01.840 net/mvpp2: not in enabled drivers build config 00:03:01.840 net/netvsc: not in enabled drivers build config 00:03:01.840 net/nfb: not in enabled drivers build config 00:03:01.840 net/nfp: not in enabled drivers build config 00:03:01.840 net/ngbe: not in enabled drivers build config 00:03:01.840 net/null: not in enabled drivers build config 00:03:01.840 net/octeontx: not in enabled drivers build config 00:03:01.840 net/octeon_ep: not in enabled drivers build config 00:03:01.840 net/pcap: not in enabled drivers build config 00:03:01.840 net/pfe: not in enabled drivers build config 00:03:01.841 net/qede: not in enabled drivers build config 00:03:01.841 net/ring: not in enabled drivers build config 00:03:01.841 net/sfc: not in enabled drivers build config 00:03:01.841 net/softnic: not in enabled drivers build config 00:03:01.841 net/tap: not in enabled drivers build config 00:03:01.841 net/thunderx: not in enabled drivers build config 00:03:01.841 net/txgbe: not in enabled drivers build config 00:03:01.841 net/vdev_netvsc: not in enabled drivers build config 00:03:01.841 net/vhost: not in enabled drivers build config 00:03:01.841 net/virtio: not in enabled drivers build config 00:03:01.841 net/vmxnet3: not in enabled drivers build config 00:03:01.841 raw/*: missing internal dependency, "rawdev" 00:03:01.841 crypto/armv8: not in enabled drivers build config 00:03:01.841 crypto/bcmfs: not in enabled drivers build config 00:03:01.841 crypto/caam_jr: not in enabled drivers build config 00:03:01.841 crypto/ccp: not in enabled drivers build config 00:03:01.841 crypto/cnxk: not in enabled drivers build config 00:03:01.841 crypto/dpaa_sec: not in enabled drivers build config 00:03:01.841 crypto/dpaa2_sec: not in enabled drivers build config 00:03:01.841 crypto/ipsec_mb: not in enabled drivers build config 00:03:01.841 crypto/mlx5: not in enabled drivers build config 00:03:01.841 crypto/mvsam: not in enabled drivers build config 00:03:01.841 crypto/nitrox: 
not in enabled drivers build config 00:03:01.841 crypto/null: not in enabled drivers build config 00:03:01.841 crypto/octeontx: not in enabled drivers build config 00:03:01.841 crypto/openssl: not in enabled drivers build config 00:03:01.841 crypto/scheduler: not in enabled drivers build config 00:03:01.841 crypto/uadk: not in enabled drivers build config 00:03:01.841 crypto/virtio: not in enabled drivers build config 00:03:01.841 compress/isal: not in enabled drivers build config 00:03:01.841 compress/mlx5: not in enabled drivers build config 00:03:01.841 compress/nitrox: not in enabled drivers build config 00:03:01.841 compress/octeontx: not in enabled drivers build config 00:03:01.841 compress/zlib: not in enabled drivers build config 00:03:01.841 regex/*: missing internal dependency, "regexdev" 00:03:01.841 ml/*: missing internal dependency, "mldev" 00:03:01.841 vdpa/ifc: not in enabled drivers build config 00:03:01.841 vdpa/mlx5: not in enabled drivers build config 00:03:01.841 vdpa/nfp: not in enabled drivers build config 00:03:01.841 vdpa/sfc: not in enabled drivers build config 00:03:01.841 event/*: missing internal dependency, "eventdev" 00:03:01.841 baseband/*: missing internal dependency, "bbdev" 00:03:01.841 gpu/*: missing internal dependency, "gpudev" 00:03:01.841 00:03:01.841 00:03:01.841 Build targets in project: 84 00:03:01.841 00:03:01.841 DPDK 24.03.0 00:03:01.841 00:03:01.841 User defined options 00:03:01.841 buildtype : debug 00:03:01.841 default_library : shared 00:03:01.841 libdir : lib 00:03:01.841 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:01.841 b_sanitize : address 00:03:01.841 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:01.841 c_link_args : 00:03:01.841 cpu_instruction_set: native 00:03:01.841 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:01.841 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:01.841 enable_docs : false 00:03:01.841 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:01.841 enable_kmods : false 00:03:01.841 max_lcores : 128 00:03:01.841 tests : false 00:03:01.841 00:03:01.841 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:01.841 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:01.841 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:01.841 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:01.841 [3/267] Linking static target lib/librte_kvargs.a 00:03:01.841 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:01.841 [5/267] Linking static target lib/librte_log.a 00:03:01.841 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:02.100 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:02.100 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:02.358 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:02.358 [10/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 
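For context: the "User defined options" block above is meson's echo of the configuration that SPDK's build scripts applied to the bundled DPDK. As a hedged reconstruction (option names and values are copied from the log itself; the exact wrapper invocation may differ), it corresponds roughly to a single configure step of the form: meson setup build-tmp --buildtype=debug --default-library=shared --libdir=lib --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build -Db_sanitize=address -Dcpu_instruction_set=native -Denable_docs=false -Denable_kmods=false -Dmax_lcores=128 -Dtests=false -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring -Ddisable_apps=<list above> -Ddisable_libs=<list above>, with c_args carrying the -Wno-*/-fPIC/-Werror flags shown. Every entry under "Content Skipped" follows mechanically from this: the apps and libs appear in the disable lists, and the remaining drivers are reported as "not in enabled drivers build config" because enable_drivers whitelists only the four bus/mempool entries SPDK needs.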
00:03:02.358 [11/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:02.358 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:02.358 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:02.358 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:02.358 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:02.358 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:02.358 [17/267] Linking static target lib/librte_telemetry.a 00:03:02.358 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:02.618 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:02.618 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:02.618 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:02.618 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:02.618 [23/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.878 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:02.878 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:02.878 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:02.878 [27/267] Linking target lib/librte_log.so.24.1 00:03:02.878 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:02.878 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:02.878 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:02.878 [31/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:03.136 [32/267] Linking target lib/librte_kvargs.so.24.1 00:03:03.136 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:03.136 [34/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.136 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:03.136 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:03.136 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:03.136 [38/267] Linking target lib/librte_telemetry.so.24.1 00:03:03.136 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:03.136 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:03.136 [41/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:03.136 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:03.136 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:03.394 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:03.394 [45/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:03.394 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:03.394 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:03.394 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:03.394 [49/267] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:03.394 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:03.652 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:03.652 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:03.652 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:03.652 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:03.652 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:03.652 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:03.910 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:03.910 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:03.910 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:03.910 [60/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:03.910 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:03.910 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:03.910 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:04.168 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:04.168 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:04.168 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:04.168 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:04.426 [68/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:04.426 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:04.426 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:04.426 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:04.426 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:04.426 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:04.426 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:04.426 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:04.426 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:04.426 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:04.684 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:04.684 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:04.684 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:04.684 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:04.685 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:04.685 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:04.943 [84/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:04.943 [85/267] Linking static target lib/librte_ring.a 00:03:04.943 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:04.943 [87/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:04.943 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:04.943 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:04.943 [90/267] Compiling C 
object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:04.943 [91/267] Linking static target lib/librte_eal.a 00:03:04.943 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:04.943 [93/267] Linking static target lib/librte_mempool.a 00:03:05.201 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:05.201 [95/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:05.201 [96/267] Linking static target lib/librte_rcu.a 00:03:05.201 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:05.201 [98/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.459 [99/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:05.459 [100/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:05.459 [101/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:05.459 [102/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:05.459 [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:05.459 [104/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:05.459 [105/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:05.459 [106/267] Linking static target lib/librte_net.a 00:03:05.459 [107/267] Linking static target lib/librte_mbuf.a 00:03:05.459 [108/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.743 [109/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:05.743 [110/267] Linking static target lib/librte_meter.a 00:03:05.743 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:05.743 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:06.010 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:06.010 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:06.010 [115/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.010 [116/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.010 [117/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.010 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:06.268 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:06.268 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:06.268 [121/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.268 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:06.268 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:06.527 [124/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:06.527 [125/267] Linking static target lib/librte_pci.a 00:03:06.527 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:06.527 [127/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:06.527 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:06.527 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:06.785 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:06.785 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:06.785 
[132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:06.785 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:06.785 [134/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.785 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:06.785 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:06.785 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:06.785 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:06.785 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:06.785 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:06.785 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:06.785 [142/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:06.785 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:06.785 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:07.044 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:07.044 [146/267] Linking static target lib/librte_cmdline.a 00:03:07.044 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:07.044 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:07.044 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:07.302 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:07.302 [151/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:07.302 [152/267] Linking static target lib/librte_timer.a 00:03:07.302 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:07.302 [154/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:07.302 [155/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:07.561 [156/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:07.561 [157/267] Linking static target lib/librte_compressdev.a 00:03:07.561 [158/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:07.561 [159/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:07.561 [160/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:07.561 [161/267] Linking static target lib/librte_hash.a 00:03:07.820 [162/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:07.820 [163/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.820 [164/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:07.820 [165/267] Linking static target lib/librte_dmadev.a 00:03:07.820 [166/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:07.820 [167/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:07.820 [168/267] Linking static target lib/librte_ethdev.a 00:03:07.820 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:08.078 [170/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:08.078 [171/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:08.078 
[172/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:08.078 [173/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.078 [174/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.337 [175/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:08.337 [176/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:08.337 [177/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:08.337 [178/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.337 [179/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:08.337 [180/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:08.337 [181/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:08.337 [182/267] Linking static target lib/librte_cryptodev.a 00:03:08.595 [183/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.595 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:08.595 [185/267] Linking static target lib/librte_power.a 00:03:08.595 [186/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:08.852 [187/267] Linking static target lib/librte_reorder.a 00:03:08.852 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:08.852 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:08.852 [190/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:08.852 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:08.852 [192/267] Linking static target lib/librte_security.a 00:03:09.111 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.111 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:09.369 [195/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:09.369 [196/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.369 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:09.369 [198/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.627 [199/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:09.627 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:09.627 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:09.886 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:09.886 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:09.886 [204/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:09.886 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:10.144 [206/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:10.144 [207/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:10.144 [208/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:10.144 [209/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:10.402 [210/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:10.402 [211/267] Generating 
drivers/rte_bus_vdev.pmd.c with a custom command 00:03:10.402 [212/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:10.402 [213/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:10.402 [214/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:10.402 [215/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:10.402 [216/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:10.402 [217/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:10.402 [218/267] Linking static target drivers/librte_bus_vdev.a 00:03:10.402 [219/267] Linking static target drivers/librte_bus_pci.a 00:03:10.402 [220/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.402 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:10.402 [222/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:10.402 [223/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:10.402 [224/267] Linking static target drivers/librte_mempool_ring.a 00:03:10.662 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.662 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.920 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:12.297 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.555 [229/267] Linking target lib/librte_eal.so.24.1 00:03:12.555 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:12.555 [231/267] Linking target lib/librte_timer.so.24.1 00:03:12.555 [232/267] Linking target lib/librte_pci.so.24.1 00:03:12.555 [233/267] Linking target lib/librte_ring.so.24.1 00:03:12.555 [234/267] Linking target lib/librte_meter.so.24.1 00:03:12.555 [235/267] Linking target lib/librte_dmadev.so.24.1 00:03:12.555 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:12.555 [237/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:12.812 [238/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:12.812 [239/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:12.812 [240/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:12.812 [241/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:12.812 [242/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:12.812 [243/267] Linking target lib/librte_rcu.so.24.1 00:03:12.812 [244/267] Linking target lib/librte_mempool.so.24.1 00:03:12.812 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:12.812 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:12.812 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:12.812 [248/267] Linking target lib/librte_mbuf.so.24.1 00:03:13.070 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:13.070 [250/267] Linking target lib/librte_compressdev.so.24.1 00:03:13.070 
[251/267] Linking target lib/librte_reorder.so.24.1 00:03:13.070 [252/267] Linking target lib/librte_net.so.24.1 00:03:13.070 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:03:13.070 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:13.070 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:13.070 [256/267] Linking target lib/librte_cmdline.so.24.1 00:03:13.070 [257/267] Linking target lib/librte_hash.so.24.1 00:03:13.070 [258/267] Linking target lib/librte_security.so.24.1 00:03:13.329 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:13.329 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.329 [261/267] Linking target lib/librte_ethdev.so.24.1 00:03:13.587 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:13.587 [263/267] Linking target lib/librte_power.so.24.1 00:03:13.587 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:13.587 [265/267] Linking static target lib/librte_vhost.a 00:03:14.961 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.961 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:14.961 INFO: autodetecting backend as ninja 00:03:14.961 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:29.830 CC lib/log/log.o 00:03:29.830 CC lib/log/log_deprecated.o 00:03:29.830 CC lib/log/log_flags.o 00:03:29.830 CC lib/ut_mock/mock.o 00:03:29.830 CC lib/ut/ut.o 00:03:29.830 LIB libspdk_ut_mock.a 00:03:29.830 LIB libspdk_ut.a 00:03:29.830 LIB libspdk_log.a 00:03:29.830 SO libspdk_ut_mock.so.6.0 00:03:29.830 SO libspdk_ut.so.2.0 00:03:29.831 SO libspdk_log.so.7.1 00:03:29.831 SYMLINK libspdk_ut_mock.so 00:03:29.831 SYMLINK libspdk_ut.so 00:03:29.831 SYMLINK libspdk_log.so 00:03:29.831 CXX lib/trace_parser/trace.o 00:03:29.831 CC lib/ioat/ioat.o 00:03:29.831 CC lib/dma/dma.o 00:03:29.831 CC lib/util/base64.o 00:03:29.831 CC lib/util/bit_array.o 00:03:29.831 CC lib/util/cpuset.o 00:03:29.831 CC lib/util/crc16.o 00:03:29.831 CC lib/util/crc32.o 00:03:29.831 CC lib/util/crc32c.o 00:03:29.831 CC lib/vfio_user/host/vfio_user_pci.o 00:03:29.831 CC lib/util/crc32_ieee.o 00:03:29.831 CC lib/util/crc64.o 00:03:29.831 CC lib/util/dif.o 00:03:29.831 CC lib/util/fd.o 00:03:29.831 LIB libspdk_dma.a 00:03:29.831 CC lib/util/fd_group.o 00:03:29.831 SO libspdk_dma.so.5.0 00:03:29.831 CC lib/util/file.o 00:03:29.831 CC lib/util/hexlify.o 00:03:29.831 CC lib/util/iov.o 00:03:29.831 SYMLINK libspdk_dma.so 00:03:29.831 CC lib/util/math.o 00:03:29.831 LIB libspdk_ioat.a 00:03:29.831 CC lib/util/net.o 00:03:29.831 SO libspdk_ioat.so.7.0 00:03:29.831 CC lib/vfio_user/host/vfio_user.o 00:03:29.831 CC lib/util/pipe.o 00:03:29.831 SYMLINK libspdk_ioat.so 00:03:29.831 CC lib/util/strerror_tls.o 00:03:29.831 CC lib/util/string.o 00:03:29.831 CC lib/util/uuid.o 00:03:29.831 CC lib/util/xor.o 00:03:29.831 CC lib/util/zipf.o 00:03:29.831 CC lib/util/md5.o 00:03:29.831 LIB libspdk_vfio_user.a 00:03:29.831 SO libspdk_vfio_user.so.5.0 00:03:29.831 SYMLINK libspdk_vfio_user.so 00:03:29.831 LIB libspdk_util.a 00:03:29.831 SO libspdk_util.so.10.0 00:03:29.831 LIB libspdk_trace_parser.a 00:03:29.831 SYMLINK libspdk_util.so 00:03:29.831 SO libspdk_trace_parser.so.6.0 00:03:29.831 SYMLINK libspdk_trace_parser.so 00:03:29.831 
CC lib/env_dpdk/env.o 00:03:29.831 CC lib/env_dpdk/memory.o 00:03:29.831 CC lib/env_dpdk/pci.o 00:03:29.831 CC lib/vmd/led.o 00:03:29.831 CC lib/vmd/vmd.o 00:03:29.831 CC lib/rdma_provider/common.o 00:03:29.831 CC lib/conf/conf.o 00:03:29.831 CC lib/idxd/idxd.o 00:03:29.831 CC lib/rdma_utils/rdma_utils.o 00:03:29.831 CC lib/json/json_parse.o 00:03:30.089 CC lib/json/json_util.o 00:03:30.089 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:30.089 LIB libspdk_conf.a 00:03:30.089 CC lib/json/json_write.o 00:03:30.089 SO libspdk_conf.so.6.0 00:03:30.089 LIB libspdk_rdma_utils.a 00:03:30.347 SO libspdk_rdma_utils.so.1.0 00:03:30.347 SYMLINK libspdk_conf.so 00:03:30.347 CC lib/idxd/idxd_user.o 00:03:30.347 LIB libspdk_rdma_provider.a 00:03:30.347 SYMLINK libspdk_rdma_utils.so 00:03:30.347 CC lib/env_dpdk/init.o 00:03:30.347 CC lib/env_dpdk/threads.o 00:03:30.347 CC lib/idxd/idxd_kernel.o 00:03:30.347 SO libspdk_rdma_provider.so.6.0 00:03:30.347 SYMLINK libspdk_rdma_provider.so 00:03:30.347 CC lib/env_dpdk/pci_ioat.o 00:03:30.347 CC lib/env_dpdk/pci_virtio.o 00:03:30.347 LIB libspdk_json.a 00:03:30.347 CC lib/env_dpdk/pci_vmd.o 00:03:30.347 CC lib/env_dpdk/pci_idxd.o 00:03:30.347 SO libspdk_json.so.6.0 00:03:30.606 CC lib/env_dpdk/pci_event.o 00:03:30.606 SYMLINK libspdk_json.so 00:03:30.606 CC lib/env_dpdk/sigbus_handler.o 00:03:30.606 CC lib/env_dpdk/pci_dpdk.o 00:03:30.606 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:30.606 LIB libspdk_idxd.a 00:03:30.606 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:30.606 SO libspdk_idxd.so.12.1 00:03:30.606 LIB libspdk_vmd.a 00:03:30.606 SO libspdk_vmd.so.6.0 00:03:30.606 SYMLINK libspdk_idxd.so 00:03:30.606 CC lib/jsonrpc/jsonrpc_client.o 00:03:30.606 CC lib/jsonrpc/jsonrpc_server.o 00:03:30.606 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:30.606 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:30.606 SYMLINK libspdk_vmd.so 00:03:30.865 LIB libspdk_jsonrpc.a 00:03:31.126 SO libspdk_jsonrpc.so.6.0 00:03:31.126 SYMLINK libspdk_jsonrpc.so 00:03:31.126 LIB libspdk_env_dpdk.a 00:03:31.126 SO libspdk_env_dpdk.so.15.1 00:03:31.386 SYMLINK libspdk_env_dpdk.so 00:03:31.386 CC lib/rpc/rpc.o 00:03:31.386 LIB libspdk_rpc.a 00:03:31.643 SO libspdk_rpc.so.6.0 00:03:31.643 SYMLINK libspdk_rpc.so 00:03:31.643 CC lib/keyring/keyring.o 00:03:31.643 CC lib/keyring/keyring_rpc.o 00:03:31.902 CC lib/notify/notify_rpc.o 00:03:31.902 CC lib/notify/notify.o 00:03:31.902 CC lib/trace/trace.o 00:03:31.902 CC lib/trace/trace_flags.o 00:03:31.902 CC lib/trace/trace_rpc.o 00:03:31.902 LIB libspdk_notify.a 00:03:31.902 SO libspdk_notify.so.6.0 00:03:31.902 LIB libspdk_keyring.a 00:03:31.902 SYMLINK libspdk_notify.so 00:03:31.902 SO libspdk_keyring.so.2.0 00:03:31.902 LIB libspdk_trace.a 00:03:32.160 SO libspdk_trace.so.11.0 00:03:32.160 SYMLINK libspdk_keyring.so 00:03:32.160 SYMLINK libspdk_trace.so 00:03:32.419 CC lib/sock/sock.o 00:03:32.419 CC lib/sock/sock_rpc.o 00:03:32.419 CC lib/thread/thread.o 00:03:32.419 CC lib/thread/iobuf.o 00:03:32.685 LIB libspdk_sock.a 00:03:32.685 SO libspdk_sock.so.10.0 00:03:32.685 SYMLINK libspdk_sock.so 00:03:32.943 CC lib/nvme/nvme_fabric.o 00:03:32.943 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:32.943 CC lib/nvme/nvme_ctrlr.o 00:03:32.943 CC lib/nvme/nvme_ns_cmd.o 00:03:32.943 CC lib/nvme/nvme_qpair.o 00:03:32.943 CC lib/nvme/nvme_ns.o 00:03:32.943 CC lib/nvme/nvme_pcie.o 00:03:32.943 CC lib/nvme/nvme_pcie_common.o 00:03:32.943 CC lib/nvme/nvme.o 00:03:33.508 CC lib/nvme/nvme_quirks.o 00:03:33.508 CC lib/nvme/nvme_transport.o 00:03:33.767 CC lib/nvme/nvme_discovery.o 
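The lib/nvme objects being compiled at this point (nvme.o, nvme_ctrlr.o, nvme_qpair.o, nvme_pcie.o, nvme_fabric.o, ...) form libspdk_nvme, the userspace NVMe driver that the later test stages exercise. For orientation, a minimal C sketch of the public API these files implement, modeled on SPDK's hello_world example; the program is illustrative only and is not part of this build:

#include <stdbool.h>
#include <stdio.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

/* Called once per controller found during enumeration; returning true
 * tells the driver to attach to it. */
static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *opts)
{
        printf("probing %s\n", trid->traddr);
        return true;
}

/* Called after a controller has been attached and initialized. */
static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
          struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
        printf("attached: %s, %u namespaces\n", trid->traddr,
               spdk_nvme_ctrlr_get_num_ns(ctrlr));
}

int
main(void)
{
        struct spdk_env_opts opts;

        /* Hugepage/PCI setup comes from the env_dpdk objects built above. */
        spdk_env_opts_init(&opts);
        opts.name = "nvme_probe_sketch";
        if (spdk_env_init(&opts) < 0) {
                return 1;
        }
        /* A NULL transport ID scans the local PCIe bus; remove_cb omitted. */
        if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
                return 1;
        }
        return 0;
}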
00:03:33.767 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:33.767 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:33.767 LIB libspdk_thread.a 00:03:33.767 SO libspdk_thread.so.11.0 00:03:33.767 CC lib/nvme/nvme_tcp.o 00:03:33.767 CC lib/nvme/nvme_opal.o 00:03:34.025 SYMLINK libspdk_thread.so 00:03:34.026 CC lib/nvme/nvme_io_msg.o 00:03:34.026 CC lib/nvme/nvme_poll_group.o 00:03:34.026 CC lib/nvme/nvme_zns.o 00:03:34.284 CC lib/nvme/nvme_stubs.o 00:03:34.284 CC lib/nvme/nvme_auth.o 00:03:34.284 CC lib/nvme/nvme_cuse.o 00:03:34.284 CC lib/nvme/nvme_rdma.o 00:03:34.543 CC lib/accel/accel.o 00:03:34.543 CC lib/accel/accel_rpc.o 00:03:34.543 CC lib/accel/accel_sw.o 00:03:34.800 CC lib/blob/blobstore.o 00:03:34.800 CC lib/init/json_config.o 00:03:34.800 CC lib/virtio/virtio.o 00:03:34.800 CC lib/init/subsystem.o 00:03:34.800 CC lib/init/subsystem_rpc.o 00:03:35.058 CC lib/virtio/virtio_vhost_user.o 00:03:35.058 CC lib/init/rpc.o 00:03:35.058 CC lib/blob/request.o 00:03:35.058 CC lib/virtio/virtio_vfio_user.o 00:03:35.058 CC lib/virtio/virtio_pci.o 00:03:35.058 LIB libspdk_init.a 00:03:35.058 SO libspdk_init.so.6.0 00:03:35.317 SYMLINK libspdk_init.so 00:03:35.317 CC lib/fsdev/fsdev.o 00:03:35.317 CC lib/blob/zeroes.o 00:03:35.317 CC lib/fsdev/fsdev_io.o 00:03:35.317 CC lib/blob/blob_bs_dev.o 00:03:35.317 CC lib/fsdev/fsdev_rpc.o 00:03:35.317 LIB libspdk_virtio.a 00:03:35.317 CC lib/event/app.o 00:03:35.317 SO libspdk_virtio.so.7.0 00:03:35.317 CC lib/event/reactor.o 00:03:35.317 SYMLINK libspdk_virtio.so 00:03:35.317 CC lib/event/log_rpc.o 00:03:35.575 CC lib/event/app_rpc.o 00:03:35.575 CC lib/event/scheduler_static.o 00:03:35.575 LIB libspdk_accel.a 00:03:35.575 SO libspdk_accel.so.16.0 00:03:35.575 SYMLINK libspdk_accel.so 00:03:35.834 LIB libspdk_nvme.a 00:03:35.834 LIB libspdk_event.a 00:03:35.834 CC lib/bdev/bdev_rpc.o 00:03:35.834 CC lib/bdev/bdev.o 00:03:35.834 CC lib/bdev/bdev_zone.o 00:03:35.834 CC lib/bdev/part.o 00:03:35.834 CC lib/bdev/scsi_nvme.o 00:03:35.834 SO libspdk_event.so.14.0 00:03:35.834 SO libspdk_nvme.so.14.1 00:03:35.834 LIB libspdk_fsdev.a 00:03:35.834 SO libspdk_fsdev.so.2.0 00:03:35.834 SYMLINK libspdk_event.so 00:03:36.092 SYMLINK libspdk_fsdev.so 00:03:36.092 SYMLINK libspdk_nvme.so 00:03:36.092 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:36.658 LIB libspdk_fuse_dispatcher.a 00:03:36.916 SO libspdk_fuse_dispatcher.so.1.0 00:03:36.916 SYMLINK libspdk_fuse_dispatcher.so 00:03:37.852 LIB libspdk_bdev.a 00:03:37.852 LIB libspdk_blob.a 00:03:38.110 SO libspdk_bdev.so.17.0 00:03:38.110 SO libspdk_blob.so.11.0 00:03:38.110 SYMLINK libspdk_bdev.so 00:03:38.110 SYMLINK libspdk_blob.so 00:03:38.110 CC lib/ftl/ftl_core.o 00:03:38.110 CC lib/nvmf/ctrlr.o 00:03:38.110 CC lib/ublk/ublk.o 00:03:38.110 CC lib/ftl/ftl_init.o 00:03:38.110 CC lib/nvmf/ctrlr_discovery.o 00:03:38.110 CC lib/ublk/ublk_rpc.o 00:03:38.110 CC lib/scsi/dev.o 00:03:38.110 CC lib/nbd/nbd.o 00:03:38.368 CC lib/blobfs/blobfs.o 00:03:38.368 CC lib/lvol/lvol.o 00:03:38.368 CC lib/ftl/ftl_layout.o 00:03:38.368 CC lib/nbd/nbd_rpc.o 00:03:38.368 CC lib/scsi/lun.o 00:03:38.627 CC lib/scsi/port.o 00:03:38.627 CC lib/blobfs/tree.o 00:03:38.627 CC lib/nvmf/ctrlr_bdev.o 00:03:38.627 LIB libspdk_nbd.a 00:03:38.627 SO libspdk_nbd.so.7.0 00:03:38.627 CC lib/ftl/ftl_debug.o 00:03:38.627 CC lib/ftl/ftl_io.o 00:03:38.627 SYMLINK libspdk_nbd.so 00:03:38.627 CC lib/ftl/ftl_sb.o 00:03:38.627 CC lib/ftl/ftl_l2p.o 00:03:38.627 CC lib/scsi/scsi.o 00:03:38.885 LIB libspdk_ublk.a 00:03:38.885 CC lib/scsi/scsi_bdev.o 00:03:38.885 CC 
lib/scsi/scsi_pr.o 00:03:38.885 SO libspdk_ublk.so.3.0 00:03:38.885 CC lib/ftl/ftl_l2p_flat.o 00:03:38.885 CC lib/nvmf/subsystem.o 00:03:38.885 CC lib/nvmf/nvmf.o 00:03:38.885 SYMLINK libspdk_ublk.so 00:03:38.885 CC lib/nvmf/nvmf_rpc.o 00:03:38.885 LIB libspdk_blobfs.a 00:03:39.143 SO libspdk_blobfs.so.10.0 00:03:39.143 CC lib/ftl/ftl_nv_cache.o 00:03:39.143 SYMLINK libspdk_blobfs.so 00:03:39.143 CC lib/ftl/ftl_band.o 00:03:39.143 CC lib/ftl/ftl_band_ops.o 00:03:39.143 LIB libspdk_lvol.a 00:03:39.143 SO libspdk_lvol.so.10.0 00:03:39.143 SYMLINK libspdk_lvol.so 00:03:39.143 CC lib/ftl/ftl_writer.o 00:03:39.143 CC lib/nvmf/transport.o 00:03:39.402 CC lib/scsi/scsi_rpc.o 00:03:39.402 CC lib/scsi/task.o 00:03:39.402 CC lib/ftl/ftl_rq.o 00:03:39.402 CC lib/ftl/ftl_reloc.o 00:03:39.402 CC lib/ftl/ftl_l2p_cache.o 00:03:39.402 LIB libspdk_scsi.a 00:03:39.402 SO libspdk_scsi.so.9.0 00:03:39.660 CC lib/nvmf/tcp.o 00:03:39.660 SYMLINK libspdk_scsi.so 00:03:39.660 CC lib/nvmf/stubs.o 00:03:39.660 CC lib/ftl/ftl_p2l.o 00:03:39.660 CC lib/nvmf/mdns_server.o 00:03:39.660 CC lib/nvmf/rdma.o 00:03:39.918 CC lib/nvmf/auth.o 00:03:39.918 CC lib/ftl/ftl_p2l_log.o 00:03:39.918 CC lib/ftl/mngt/ftl_mngt.o 00:03:39.918 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:39.918 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:39.918 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:40.176 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:40.176 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:40.176 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:40.176 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:40.176 CC lib/iscsi/conn.o 00:03:40.176 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:40.434 CC lib/vhost/vhost.o 00:03:40.434 CC lib/vhost/vhost_rpc.o 00:03:40.434 CC lib/vhost/vhost_scsi.o 00:03:40.434 CC lib/vhost/vhost_blk.o 00:03:40.434 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:40.434 CC lib/vhost/rte_vhost_user.o 00:03:40.692 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:40.692 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:40.692 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:40.692 CC lib/ftl/utils/ftl_conf.o 00:03:40.692 CC lib/ftl/utils/ftl_md.o 00:03:40.959 CC lib/iscsi/init_grp.o 00:03:40.959 CC lib/ftl/utils/ftl_mempool.o 00:03:40.959 CC lib/ftl/utils/ftl_bitmap.o 00:03:40.959 CC lib/ftl/utils/ftl_property.o 00:03:40.959 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:40.959 CC lib/iscsi/iscsi.o 00:03:40.959 CC lib/iscsi/param.o 00:03:40.959 CC lib/iscsi/portal_grp.o 00:03:40.959 CC lib/iscsi/tgt_node.o 00:03:41.237 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:41.237 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:41.237 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:41.237 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:41.237 LIB libspdk_vhost.a 00:03:41.237 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:41.237 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:41.237 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:41.237 SO libspdk_vhost.so.8.0 00:03:41.237 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:41.237 CC lib/iscsi/iscsi_subsystem.o 00:03:41.237 CC lib/iscsi/iscsi_rpc.o 00:03:41.237 SYMLINK libspdk_vhost.so 00:03:41.237 CC lib/iscsi/task.o 00:03:41.495 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:41.495 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:41.495 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:41.495 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:41.495 CC lib/ftl/base/ftl_base_dev.o 00:03:41.495 CC lib/ftl/base/ftl_base_bdev.o 00:03:41.495 CC lib/ftl/ftl_trace.o 00:03:41.753 LIB libspdk_nvmf.a 00:03:41.753 LIB libspdk_ftl.a 00:03:41.753 SO libspdk_nvmf.so.20.0 00:03:42.012 SO libspdk_ftl.so.9.0 00:03:42.012 SYMLINK libspdk_nvmf.so 00:03:42.012 LIB 
libspdk_iscsi.a 00:03:42.012 SYMLINK libspdk_ftl.so 00:03:42.271 SO libspdk_iscsi.so.8.0 00:03:42.271 SYMLINK libspdk_iscsi.so 00:03:42.529 CC module/env_dpdk/env_dpdk_rpc.o 00:03:42.529 CC module/accel/iaa/accel_iaa.o 00:03:42.529 CC module/accel/dsa/accel_dsa.o 00:03:42.529 CC module/keyring/file/keyring.o 00:03:42.529 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:42.529 CC module/accel/error/accel_error.o 00:03:42.529 CC module/fsdev/aio/fsdev_aio.o 00:03:42.529 CC module/accel/ioat/accel_ioat.o 00:03:42.529 CC module/blob/bdev/blob_bdev.o 00:03:42.529 CC module/sock/posix/posix.o 00:03:42.787 LIB libspdk_env_dpdk_rpc.a 00:03:42.787 SO libspdk_env_dpdk_rpc.so.6.0 00:03:42.787 SYMLINK libspdk_env_dpdk_rpc.so 00:03:42.787 CC module/keyring/file/keyring_rpc.o 00:03:42.787 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:42.787 CC module/accel/iaa/accel_iaa_rpc.o 00:03:42.787 CC module/accel/ioat/accel_ioat_rpc.o 00:03:42.787 CC module/accel/error/accel_error_rpc.o 00:03:42.787 LIB libspdk_scheduler_dynamic.a 00:03:42.787 SO libspdk_scheduler_dynamic.so.4.0 00:03:42.787 CC module/accel/dsa/accel_dsa_rpc.o 00:03:42.787 LIB libspdk_blob_bdev.a 00:03:42.787 SO libspdk_blob_bdev.so.11.0 00:03:42.787 LIB libspdk_accel_ioat.a 00:03:42.787 LIB libspdk_keyring_file.a 00:03:42.787 SYMLINK libspdk_scheduler_dynamic.so 00:03:42.787 LIB libspdk_accel_error.a 00:03:42.787 LIB libspdk_accel_iaa.a 00:03:42.787 CC module/fsdev/aio/linux_aio_mgr.o 00:03:42.787 SO libspdk_accel_ioat.so.6.0 00:03:42.787 SO libspdk_keyring_file.so.2.0 00:03:42.787 SO libspdk_accel_error.so.2.0 00:03:42.787 SO libspdk_accel_iaa.so.3.0 00:03:42.787 SYMLINK libspdk_blob_bdev.so 00:03:43.044 LIB libspdk_accel_dsa.a 00:03:43.044 SYMLINK libspdk_keyring_file.so 00:03:43.044 SYMLINK libspdk_accel_ioat.so 00:03:43.044 SO libspdk_accel_dsa.so.5.0 00:03:43.044 SYMLINK libspdk_accel_iaa.so 00:03:43.044 SYMLINK libspdk_accel_error.so 00:03:43.044 SYMLINK libspdk_accel_dsa.so 00:03:43.044 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:43.044 CC module/keyring/linux/keyring.o 00:03:43.044 CC module/scheduler/gscheduler/gscheduler.o 00:03:43.045 LIB libspdk_scheduler_dpdk_governor.a 00:03:43.045 CC module/bdev/gpt/gpt.o 00:03:43.045 CC module/bdev/error/vbdev_error.o 00:03:43.045 CC module/bdev/delay/vbdev_delay.o 00:03:43.045 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:43.045 LIB libspdk_fsdev_aio.a 00:03:43.045 CC module/blobfs/bdev/blobfs_bdev.o 00:03:43.045 CC module/keyring/linux/keyring_rpc.o 00:03:43.045 LIB libspdk_scheduler_gscheduler.a 00:03:43.302 SO libspdk_fsdev_aio.so.1.0 00:03:43.302 CC module/bdev/lvol/vbdev_lvol.o 00:03:43.302 SO libspdk_scheduler_gscheduler.so.4.0 00:03:43.302 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:43.302 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:43.302 SYMLINK libspdk_fsdev_aio.so 00:03:43.302 SYMLINK libspdk_scheduler_gscheduler.so 00:03:43.302 CC module/bdev/error/vbdev_error_rpc.o 00:03:43.302 LIB libspdk_keyring_linux.a 00:03:43.302 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:43.302 CC module/bdev/gpt/vbdev_gpt.o 00:03:43.302 SO libspdk_keyring_linux.so.1.0 00:03:43.302 SYMLINK libspdk_keyring_linux.so 00:03:43.302 LIB libspdk_bdev_error.a 00:03:43.302 LIB libspdk_sock_posix.a 00:03:43.302 CC module/bdev/malloc/bdev_malloc.o 00:03:43.302 SO libspdk_bdev_error.so.6.0 00:03:43.303 SO libspdk_sock_posix.so.6.0 00:03:43.303 LIB libspdk_blobfs_bdev.a 00:03:43.561 SO libspdk_blobfs_bdev.so.6.0 00:03:43.561 SYMLINK libspdk_bdev_error.so 00:03:43.561 CC 
module/bdev/delay/vbdev_delay_rpc.o 00:03:43.561 CC module/bdev/null/bdev_null.o 00:03:43.561 CC module/bdev/nvme/bdev_nvme.o 00:03:43.561 SYMLINK libspdk_blobfs_bdev.so 00:03:43.561 LIB libspdk_bdev_gpt.a 00:03:43.561 SYMLINK libspdk_sock_posix.so 00:03:43.561 SO libspdk_bdev_gpt.so.6.0 00:03:43.561 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:43.561 SYMLINK libspdk_bdev_gpt.so 00:03:43.561 CC module/bdev/nvme/nvme_rpc.o 00:03:43.561 LIB libspdk_bdev_delay.a 00:03:43.561 CC module/bdev/passthru/vbdev_passthru.o 00:03:43.561 CC module/bdev/split/vbdev_split.o 00:03:43.561 SO libspdk_bdev_delay.so.6.0 00:03:43.561 CC module/bdev/raid/bdev_raid.o 00:03:43.561 LIB libspdk_bdev_lvol.a 00:03:43.561 CC module/bdev/null/bdev_null_rpc.o 00:03:43.561 SO libspdk_bdev_lvol.so.6.0 00:03:43.561 SYMLINK libspdk_bdev_delay.so 00:03:43.820 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:43.820 SYMLINK libspdk_bdev_lvol.so 00:03:43.820 CC module/bdev/raid/bdev_raid_rpc.o 00:03:43.820 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:43.820 CC module/bdev/split/vbdev_split_rpc.o 00:03:43.820 CC module/bdev/nvme/bdev_mdns_client.o 00:03:43.820 LIB libspdk_bdev_null.a 00:03:43.820 SO libspdk_bdev_null.so.6.0 00:03:43.820 CC module/bdev/raid/bdev_raid_sb.o 00:03:43.820 LIB libspdk_bdev_malloc.a 00:03:43.820 CC module/bdev/nvme/vbdev_opal.o 00:03:43.820 LIB libspdk_bdev_split.a 00:03:43.820 SO libspdk_bdev_malloc.so.6.0 00:03:43.820 SYMLINK libspdk_bdev_null.so 00:03:43.820 LIB libspdk_bdev_passthru.a 00:03:43.820 SO libspdk_bdev_split.so.6.0 00:03:43.820 SO libspdk_bdev_passthru.so.6.0 00:03:44.078 SYMLINK libspdk_bdev_malloc.so 00:03:44.078 CC module/bdev/raid/raid0.o 00:03:44.078 SYMLINK libspdk_bdev_split.so 00:03:44.078 CC module/bdev/raid/raid1.o 00:03:44.078 SYMLINK libspdk_bdev_passthru.so 00:03:44.078 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:44.078 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:44.078 CC module/bdev/xnvme/bdev_xnvme.o 00:03:44.078 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:44.078 CC module/bdev/aio/bdev_aio.o 00:03:44.078 CC module/bdev/raid/concat.o 00:03:44.078 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:44.337 CC module/bdev/aio/bdev_aio_rpc.o 00:03:44.337 CC module/bdev/ftl/bdev_ftl.o 00:03:44.337 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:44.337 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:44.337 CC module/bdev/iscsi/bdev_iscsi.o 00:03:44.337 LIB libspdk_bdev_xnvme.a 00:03:44.337 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:44.337 SO libspdk_bdev_xnvme.so.3.0 00:03:44.337 LIB libspdk_bdev_aio.a 00:03:44.337 SO libspdk_bdev_aio.so.6.0 00:03:44.337 SYMLINK libspdk_bdev_xnvme.so 00:03:44.337 LIB libspdk_bdev_raid.a 00:03:44.337 SYMLINK libspdk_bdev_aio.so 00:03:44.595 LIB libspdk_bdev_zone_block.a 00:03:44.595 SO libspdk_bdev_raid.so.6.0 00:03:44.596 SO libspdk_bdev_zone_block.so.6.0 00:03:44.596 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:44.596 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:44.596 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:44.596 SYMLINK libspdk_bdev_zone_block.so 00:03:44.596 SYMLINK libspdk_bdev_raid.so 00:03:44.596 LIB libspdk_bdev_ftl.a 00:03:44.596 SO libspdk_bdev_ftl.so.6.0 00:03:44.596 LIB libspdk_bdev_iscsi.a 00:03:44.596 SYMLINK libspdk_bdev_ftl.so 00:03:44.854 SO libspdk_bdev_iscsi.so.6.0 00:03:44.854 SYMLINK libspdk_bdev_iscsi.so 00:03:45.112 LIB libspdk_bdev_virtio.a 00:03:45.112 SO libspdk_bdev_virtio.so.6.0 00:03:45.112 SYMLINK libspdk_bdev_virtio.so 00:03:45.683 LIB libspdk_bdev_nvme.a 00:03:45.943 SO libspdk_bdev_nvme.so.7.1 
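With libspdk_bdev_nvme linked, the generic block-device layer and its module stack (raid, zone_block, xnvme, ftl, iscsi, virtio above) are complete. A hedged sketch of how a consumer opens one of these bdevs through the public API follows; the bdev name "Nvme0n1" and the JSON config file are assumptions for illustration, while the calls themselves are SPDK's documented app/bdev interfaces:

#include "spdk/bdev.h"
#include "spdk/event.h"
#include "spdk/thread.h"

/* Invoked on hot-remove/resize events for the opened bdev. */
static void
bdev_event_cb(enum spdk_bdev_event_type type, struct spdk_bdev *bdev, void *ctx)
{
}

/* Runs on the app's reactor thread once the framework is up. */
static void
start_fn(void *arg)
{
        struct spdk_bdev_desc *desc = NULL;
        struct spdk_io_channel *ch;

        /* "Nvme0n1" is assumed to be declared by the JSON config passed below. */
        if (spdk_bdev_open_ext("Nvme0n1", true, bdev_event_cb, NULL, &desc) != 0) {
                spdk_app_stop(-1);
                return;
        }
        /* Per-thread channel through which I/O would be submitted, e.g. with
         * spdk_bdev_read()/spdk_bdev_write(). */
        ch = spdk_bdev_get_io_channel(desc);
        if (ch != NULL) {
                spdk_put_io_channel(ch);
        }
        spdk_bdev_close(desc);
        spdk_app_stop(0);
}

int
main(int argc, char **argv)
{
        struct spdk_app_opts opts;
        int rc;

        spdk_app_opts_init(&opts, sizeof(opts));
        opts.name = "bdev_open_sketch";
        opts.json_config_file = argv[1];   /* JSON config declaring the bdevs */
        rc = spdk_app_start(&opts, start_fn, NULL);
        spdk_app_fini();
        return rc;
}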
00:03:45.943 SYMLINK libspdk_bdev_nvme.so 00:03:46.509 CC module/event/subsystems/fsdev/fsdev.o 00:03:46.509 CC module/event/subsystems/sock/sock.o 00:03:46.509 CC module/event/subsystems/scheduler/scheduler.o 00:03:46.509 CC module/event/subsystems/keyring/keyring.o 00:03:46.509 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:46.509 CC module/event/subsystems/iobuf/iobuf.o 00:03:46.509 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:46.509 CC module/event/subsystems/vmd/vmd.o 00:03:46.509 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:46.509 LIB libspdk_event_fsdev.a 00:03:46.509 LIB libspdk_event_keyring.a 00:03:46.510 LIB libspdk_event_vhost_blk.a 00:03:46.510 LIB libspdk_event_scheduler.a 00:03:46.510 LIB libspdk_event_sock.a 00:03:46.510 LIB libspdk_event_vmd.a 00:03:46.510 LIB libspdk_event_iobuf.a 00:03:46.510 SO libspdk_event_keyring.so.1.0 00:03:46.510 SO libspdk_event_scheduler.so.4.0 00:03:46.510 SO libspdk_event_vhost_blk.so.3.0 00:03:46.510 SO libspdk_event_fsdev.so.1.0 00:03:46.510 SO libspdk_event_sock.so.5.0 00:03:46.510 SO libspdk_event_vmd.so.6.0 00:03:46.510 SO libspdk_event_iobuf.so.3.0 00:03:46.510 SYMLINK libspdk_event_fsdev.so 00:03:46.510 SYMLINK libspdk_event_scheduler.so 00:03:46.510 SYMLINK libspdk_event_vhost_blk.so 00:03:46.510 SYMLINK libspdk_event_keyring.so 00:03:46.510 SYMLINK libspdk_event_sock.so 00:03:46.510 SYMLINK libspdk_event_vmd.so 00:03:46.510 SYMLINK libspdk_event_iobuf.so 00:03:46.770 CC module/event/subsystems/accel/accel.o 00:03:47.030 LIB libspdk_event_accel.a 00:03:47.030 SO libspdk_event_accel.so.6.0 00:03:47.030 SYMLINK libspdk_event_accel.so 00:03:47.291 CC module/event/subsystems/bdev/bdev.o 00:03:47.291 LIB libspdk_event_bdev.a 00:03:47.291 SO libspdk_event_bdev.so.6.0 00:03:47.552 SYMLINK libspdk_event_bdev.so 00:03:47.552 CC module/event/subsystems/scsi/scsi.o 00:03:47.552 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:47.552 CC module/event/subsystems/ublk/ublk.o 00:03:47.552 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:47.552 CC module/event/subsystems/nbd/nbd.o 00:03:47.811 LIB libspdk_event_ublk.a 00:03:47.811 LIB libspdk_event_scsi.a 00:03:47.811 LIB libspdk_event_nbd.a 00:03:47.811 SO libspdk_event_ublk.so.3.0 00:03:47.811 SO libspdk_event_scsi.so.6.0 00:03:47.811 SO libspdk_event_nbd.so.6.0 00:03:47.811 SYMLINK libspdk_event_ublk.so 00:03:47.811 SYMLINK libspdk_event_scsi.so 00:03:47.811 SYMLINK libspdk_event_nbd.so 00:03:47.811 LIB libspdk_event_nvmf.a 00:03:47.811 SO libspdk_event_nvmf.so.6.0 00:03:47.811 SYMLINK libspdk_event_nvmf.so 00:03:48.069 CC module/event/subsystems/iscsi/iscsi.o 00:03:48.069 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:48.069 LIB libspdk_event_iscsi.a 00:03:48.069 LIB libspdk_event_vhost_scsi.a 00:03:48.069 SO libspdk_event_iscsi.so.6.0 00:03:48.069 SO libspdk_event_vhost_scsi.so.3.0 00:03:48.069 SYMLINK libspdk_event_iscsi.so 00:03:48.069 SYMLINK libspdk_event_vhost_scsi.so 00:03:48.327 SO libspdk.so.6.0 00:03:48.327 SYMLINK libspdk.so 00:03:48.590 TEST_HEADER include/spdk/accel.h 00:03:48.590 CXX app/trace/trace.o 00:03:48.590 TEST_HEADER include/spdk/accel_module.h 00:03:48.590 TEST_HEADER include/spdk/assert.h 00:03:48.590 TEST_HEADER include/spdk/barrier.h 00:03:48.590 CC test/rpc_client/rpc_client_test.o 00:03:48.590 TEST_HEADER include/spdk/base64.h 00:03:48.590 TEST_HEADER include/spdk/bdev.h 00:03:48.590 TEST_HEADER include/spdk/bdev_module.h 00:03:48.590 TEST_HEADER include/spdk/bdev_zone.h 00:03:48.590 TEST_HEADER include/spdk/bit_array.h 00:03:48.590 TEST_HEADER 
include/spdk/bit_pool.h 00:03:48.590 TEST_HEADER include/spdk/blob_bdev.h 00:03:48.590 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:48.590 TEST_HEADER include/spdk/blobfs.h 00:03:48.590 TEST_HEADER include/spdk/blob.h 00:03:48.590 TEST_HEADER include/spdk/conf.h 00:03:48.590 TEST_HEADER include/spdk/config.h 00:03:48.590 TEST_HEADER include/spdk/cpuset.h 00:03:48.590 TEST_HEADER include/spdk/crc16.h 00:03:48.590 TEST_HEADER include/spdk/crc32.h 00:03:48.590 TEST_HEADER include/spdk/crc64.h 00:03:48.590 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:48.590 TEST_HEADER include/spdk/dif.h 00:03:48.590 TEST_HEADER include/spdk/dma.h 00:03:48.590 TEST_HEADER include/spdk/endian.h 00:03:48.590 TEST_HEADER include/spdk/env_dpdk.h 00:03:48.590 TEST_HEADER include/spdk/env.h 00:03:48.590 TEST_HEADER include/spdk/event.h 00:03:48.590 TEST_HEADER include/spdk/fd_group.h 00:03:48.590 TEST_HEADER include/spdk/fd.h 00:03:48.590 TEST_HEADER include/spdk/file.h 00:03:48.590 TEST_HEADER include/spdk/fsdev.h 00:03:48.590 TEST_HEADER include/spdk/fsdev_module.h 00:03:48.590 TEST_HEADER include/spdk/ftl.h 00:03:48.590 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:48.590 TEST_HEADER include/spdk/gpt_spec.h 00:03:48.590 TEST_HEADER include/spdk/hexlify.h 00:03:48.590 TEST_HEADER include/spdk/histogram_data.h 00:03:48.590 TEST_HEADER include/spdk/idxd.h 00:03:48.590 CC examples/ioat/perf/perf.o 00:03:48.590 TEST_HEADER include/spdk/idxd_spec.h 00:03:48.590 TEST_HEADER include/spdk/init.h 00:03:48.590 CC test/thread/poller_perf/poller_perf.o 00:03:48.590 TEST_HEADER include/spdk/ioat.h 00:03:48.590 TEST_HEADER include/spdk/ioat_spec.h 00:03:48.590 TEST_HEADER include/spdk/iscsi_spec.h 00:03:48.590 CC examples/util/zipf/zipf.o 00:03:48.590 TEST_HEADER include/spdk/json.h 00:03:48.590 TEST_HEADER include/spdk/jsonrpc.h 00:03:48.590 TEST_HEADER include/spdk/keyring.h 00:03:48.590 TEST_HEADER include/spdk/keyring_module.h 00:03:48.590 TEST_HEADER include/spdk/likely.h 00:03:48.590 TEST_HEADER include/spdk/log.h 00:03:48.590 TEST_HEADER include/spdk/lvol.h 00:03:48.590 TEST_HEADER include/spdk/md5.h 00:03:48.590 TEST_HEADER include/spdk/memory.h 00:03:48.590 TEST_HEADER include/spdk/mmio.h 00:03:48.590 TEST_HEADER include/spdk/nbd.h 00:03:48.590 TEST_HEADER include/spdk/net.h 00:03:48.590 TEST_HEADER include/spdk/notify.h 00:03:48.590 TEST_HEADER include/spdk/nvme.h 00:03:48.590 CC test/dma/test_dma/test_dma.o 00:03:48.590 TEST_HEADER include/spdk/nvme_intel.h 00:03:48.590 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:48.590 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:48.590 TEST_HEADER include/spdk/nvme_spec.h 00:03:48.590 TEST_HEADER include/spdk/nvme_zns.h 00:03:48.590 CC test/app/bdev_svc/bdev_svc.o 00:03:48.590 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:48.590 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:48.590 TEST_HEADER include/spdk/nvmf.h 00:03:48.590 TEST_HEADER include/spdk/nvmf_spec.h 00:03:48.590 TEST_HEADER include/spdk/nvmf_transport.h 00:03:48.590 TEST_HEADER include/spdk/opal.h 00:03:48.590 TEST_HEADER include/spdk/opal_spec.h 00:03:48.590 TEST_HEADER include/spdk/pci_ids.h 00:03:48.590 TEST_HEADER include/spdk/pipe.h 00:03:48.590 TEST_HEADER include/spdk/queue.h 00:03:48.590 TEST_HEADER include/spdk/reduce.h 00:03:48.590 TEST_HEADER include/spdk/rpc.h 00:03:48.590 TEST_HEADER include/spdk/scheduler.h 00:03:48.590 TEST_HEADER include/spdk/scsi.h 00:03:48.590 TEST_HEADER include/spdk/scsi_spec.h 00:03:48.590 TEST_HEADER include/spdk/sock.h 00:03:48.590 TEST_HEADER include/spdk/stdinc.h 
00:03:48.590 TEST_HEADER include/spdk/string.h 00:03:48.590 TEST_HEADER include/spdk/thread.h 00:03:48.590 TEST_HEADER include/spdk/trace.h 00:03:48.590 TEST_HEADER include/spdk/trace_parser.h 00:03:48.590 CC test/env/mem_callbacks/mem_callbacks.o 00:03:48.590 TEST_HEADER include/spdk/tree.h 00:03:48.590 TEST_HEADER include/spdk/ublk.h 00:03:48.590 TEST_HEADER include/spdk/util.h 00:03:48.590 TEST_HEADER include/spdk/uuid.h 00:03:48.590 TEST_HEADER include/spdk/version.h 00:03:48.590 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:48.590 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:48.590 TEST_HEADER include/spdk/vhost.h 00:03:48.590 LINK rpc_client_test 00:03:48.590 TEST_HEADER include/spdk/vmd.h 00:03:48.590 TEST_HEADER include/spdk/xor.h 00:03:48.590 TEST_HEADER include/spdk/zipf.h 00:03:48.590 CXX test/cpp_headers/accel.o 00:03:48.590 LINK poller_perf 00:03:48.590 LINK interrupt_tgt 00:03:48.590 LINK zipf 00:03:48.849 LINK bdev_svc 00:03:48.849 LINK ioat_perf 00:03:48.849 CXX test/cpp_headers/accel_module.o 00:03:48.849 LINK spdk_trace 00:03:48.849 CC test/env/vtophys/vtophys.o 00:03:48.849 CC examples/ioat/verify/verify.o 00:03:48.849 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:48.849 CXX test/cpp_headers/assert.o 00:03:48.849 CC test/env/memory/memory_ut.o 00:03:48.849 CC test/event/event_perf/event_perf.o 00:03:49.107 LINK vtophys 00:03:49.107 LINK test_dma 00:03:49.107 CXX test/cpp_headers/barrier.o 00:03:49.107 LINK env_dpdk_post_init 00:03:49.107 CC app/trace_record/trace_record.o 00:03:49.107 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:49.107 LINK mem_callbacks 00:03:49.107 LINK event_perf 00:03:49.107 LINK verify 00:03:49.107 CXX test/cpp_headers/base64.o 00:03:49.107 CXX test/cpp_headers/bdev.o 00:03:49.107 CC test/event/reactor/reactor.o 00:03:49.107 CC test/event/reactor_perf/reactor_perf.o 00:03:49.365 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:49.365 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:49.365 LINK spdk_trace_record 00:03:49.365 LINK reactor 00:03:49.365 CXX test/cpp_headers/bdev_module.o 00:03:49.365 LINK reactor_perf 00:03:49.365 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:49.365 CC examples/thread/thread/thread_ex.o 00:03:49.365 CC examples/sock/hello_world/hello_sock.o 00:03:49.365 LINK nvme_fuzz 00:03:49.365 CC test/env/pci/pci_ut.o 00:03:49.365 CXX test/cpp_headers/bdev_zone.o 00:03:49.624 CC app/nvmf_tgt/nvmf_main.o 00:03:49.624 CC test/event/app_repeat/app_repeat.o 00:03:49.624 LINK thread 00:03:49.624 CXX test/cpp_headers/bit_array.o 00:03:49.624 LINK hello_sock 00:03:49.624 CC test/event/scheduler/scheduler.o 00:03:49.624 LINK app_repeat 00:03:49.624 LINK nvmf_tgt 00:03:49.624 LINK vhost_fuzz 00:03:49.624 CXX test/cpp_headers/bit_pool.o 00:03:49.882 CXX test/cpp_headers/blob_bdev.o 00:03:49.882 CXX test/cpp_headers/blobfs_bdev.o 00:03:49.882 LINK pci_ut 00:03:49.882 LINK scheduler 00:03:49.882 CC examples/vmd/lsvmd/lsvmd.o 00:03:49.882 CXX test/cpp_headers/blobfs.o 00:03:49.882 CC app/iscsi_tgt/iscsi_tgt.o 00:03:49.882 CC examples/vmd/led/led.o 00:03:49.882 CC examples/idxd/perf/perf.o 00:03:49.882 CXX test/cpp_headers/blob.o 00:03:49.882 LINK memory_ut 00:03:50.141 CXX test/cpp_headers/conf.o 00:03:50.141 LINK lsvmd 00:03:50.141 LINK led 00:03:50.141 LINK iscsi_tgt 00:03:50.141 CXX test/cpp_headers/config.o 00:03:50.141 CXX test/cpp_headers/cpuset.o 00:03:50.141 CXX test/cpp_headers/crc16.o 00:03:50.141 CXX test/cpp_headers/crc32.o 00:03:50.141 CC test/accel/dif/dif.o 00:03:50.141 CC test/blobfs/mkfs/mkfs.o 00:03:50.400 CXX 
test/cpp_headers/crc64.o 00:03:50.400 CXX test/cpp_headers/dif.o 00:03:50.400 CC test/nvme/aer/aer.o 00:03:50.400 CXX test/cpp_headers/dma.o 00:03:50.400 LINK idxd_perf 00:03:50.400 CC test/lvol/esnap/esnap.o 00:03:50.400 LINK mkfs 00:03:50.400 CXX test/cpp_headers/endian.o 00:03:50.400 CC app/spdk_tgt/spdk_tgt.o 00:03:50.400 CXX test/cpp_headers/env_dpdk.o 00:03:50.400 CC test/app/histogram_perf/histogram_perf.o 00:03:50.400 LINK aer 00:03:50.400 CXX test/cpp_headers/env.o 00:03:50.658 CXX test/cpp_headers/event.o 00:03:50.658 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:50.658 LINK histogram_perf 00:03:50.658 LINK spdk_tgt 00:03:50.658 CC app/spdk_lspci/spdk_lspci.o 00:03:50.658 CXX test/cpp_headers/fd_group.o 00:03:50.658 CXX test/cpp_headers/fd.o 00:03:50.658 CC test/nvme/reset/reset.o 00:03:50.658 CXX test/cpp_headers/file.o 00:03:50.658 LINK spdk_lspci 00:03:50.658 CXX test/cpp_headers/fsdev.o 00:03:50.934 CXX test/cpp_headers/fsdev_module.o 00:03:50.934 LINK hello_fsdev 00:03:50.934 CXX test/cpp_headers/ftl.o 00:03:50.934 CXX test/cpp_headers/fuse_dispatcher.o 00:03:50.934 LINK dif 00:03:50.934 CXX test/cpp_headers/gpt_spec.o 00:03:50.934 CC app/spdk_nvme_perf/perf.o 00:03:50.934 CXX test/cpp_headers/hexlify.o 00:03:50.934 LINK iscsi_fuzz 00:03:50.934 LINK reset 00:03:50.934 CXX test/cpp_headers/histogram_data.o 00:03:50.934 CXX test/cpp_headers/idxd.o 00:03:50.934 CXX test/cpp_headers/idxd_spec.o 00:03:51.222 CC examples/accel/perf/accel_perf.o 00:03:51.222 CXX test/cpp_headers/init.o 00:03:51.222 CC test/nvme/sgl/sgl.o 00:03:51.222 CC test/nvme/e2edp/nvme_dp.o 00:03:51.222 CC test/nvme/overhead/overhead.o 00:03:51.222 CC test/app/jsoncat/jsoncat.o 00:03:51.222 CC test/bdev/bdevio/bdevio.o 00:03:51.222 CC test/nvme/err_injection/err_injection.o 00:03:51.222 CXX test/cpp_headers/ioat.o 00:03:51.222 LINK jsoncat 00:03:51.481 CXX test/cpp_headers/ioat_spec.o 00:03:51.481 LINK err_injection 00:03:51.481 LINK sgl 00:03:51.481 LINK nvme_dp 00:03:51.481 LINK overhead 00:03:51.481 CXX test/cpp_headers/iscsi_spec.o 00:03:51.481 LINK accel_perf 00:03:51.481 CC test/app/stub/stub.o 00:03:51.481 CXX test/cpp_headers/json.o 00:03:51.740 CC test/nvme/startup/startup.o 00:03:51.740 CXX test/cpp_headers/jsonrpc.o 00:03:51.740 LINK bdevio 00:03:51.740 CXX test/cpp_headers/keyring.o 00:03:51.740 CC test/nvme/reserve/reserve.o 00:03:51.740 LINK stub 00:03:51.740 CXX test/cpp_headers/keyring_module.o 00:03:51.740 CXX test/cpp_headers/likely.o 00:03:51.740 LINK startup 00:03:51.740 LINK spdk_nvme_perf 00:03:51.740 CXX test/cpp_headers/log.o 00:03:51.740 CXX test/cpp_headers/lvol.o 00:03:51.740 CC examples/blob/hello_world/hello_blob.o 00:03:51.998 CXX test/cpp_headers/md5.o 00:03:51.998 CXX test/cpp_headers/memory.o 00:03:51.998 LINK reserve 00:03:51.998 CC examples/blob/cli/blobcli.o 00:03:51.998 CXX test/cpp_headers/mmio.o 00:03:51.998 CC app/spdk_nvme_identify/identify.o 00:03:51.998 CC examples/nvme/hello_world/hello_world.o 00:03:51.998 CC examples/nvme/reconnect/reconnect.o 00:03:51.998 LINK hello_blob 00:03:51.998 CC test/nvme/simple_copy/simple_copy.o 00:03:51.998 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:52.256 CXX test/cpp_headers/nbd.o 00:03:52.256 CC examples/nvme/arbitration/arbitration.o 00:03:52.256 CXX test/cpp_headers/net.o 00:03:52.256 CXX test/cpp_headers/notify.o 00:03:52.256 LINK hello_world 00:03:52.256 LINK simple_copy 00:03:52.256 CC examples/nvme/hotplug/hotplug.o 00:03:52.256 CXX test/cpp_headers/nvme.o 00:03:52.514 CXX test/cpp_headers/nvme_intel.o 00:03:52.515 
LINK blobcli 00:03:52.515 LINK reconnect 00:03:52.515 LINK arbitration 00:03:52.515 LINK nvme_manage 00:03:52.515 CXX test/cpp_headers/nvme_ocssd.o 00:03:52.515 CC test/nvme/connect_stress/connect_stress.o 00:03:52.515 LINK hotplug 00:03:52.515 CC test/nvme/boot_partition/boot_partition.o 00:03:52.515 CC test/nvme/compliance/nvme_compliance.o 00:03:52.772 CC test/nvme/fused_ordering/fused_ordering.o 00:03:52.772 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:52.772 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:52.772 LINK connect_stress 00:03:52.772 CC examples/bdev/hello_world/hello_bdev.o 00:03:52.772 CXX test/cpp_headers/nvme_spec.o 00:03:52.772 LINK boot_partition 00:03:52.772 LINK spdk_nvme_identify 00:03:52.772 LINK cmb_copy 00:03:52.772 CXX test/cpp_headers/nvme_zns.o 00:03:52.772 LINK fused_ordering 00:03:52.772 LINK nvme_compliance 00:03:53.029 CC app/spdk_nvme_discover/discovery_aer.o 00:03:53.029 CXX test/cpp_headers/nvmf_cmd.o 00:03:53.029 LINK hello_bdev 00:03:53.029 CC app/spdk_top/spdk_top.o 00:03:53.029 CC examples/nvme/abort/abort.o 00:03:53.030 CC examples/bdev/bdevperf/bdevperf.o 00:03:53.030 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:53.030 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:53.030 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:53.030 CXX test/cpp_headers/nvmf.o 00:03:53.030 LINK spdk_nvme_discover 00:03:53.030 CXX test/cpp_headers/nvmf_spec.o 00:03:53.030 CXX test/cpp_headers/nvmf_transport.o 00:03:53.288 CXX test/cpp_headers/opal.o 00:03:53.288 LINK pmr_persistence 00:03:53.288 CXX test/cpp_headers/opal_spec.o 00:03:53.288 LINK doorbell_aers 00:03:53.288 CXX test/cpp_headers/pci_ids.o 00:03:53.288 CXX test/cpp_headers/pipe.o 00:03:53.288 LINK abort 00:03:53.288 CXX test/cpp_headers/queue.o 00:03:53.288 CXX test/cpp_headers/reduce.o 00:03:53.288 CXX test/cpp_headers/rpc.o 00:03:53.288 CXX test/cpp_headers/scheduler.o 00:03:53.288 CXX test/cpp_headers/scsi.o 00:03:53.288 CC test/nvme/fdp/fdp.o 00:03:53.546 CXX test/cpp_headers/scsi_spec.o 00:03:53.546 CXX test/cpp_headers/sock.o 00:03:53.546 CXX test/cpp_headers/stdinc.o 00:03:53.546 CXX test/cpp_headers/string.o 00:03:53.546 CC test/nvme/cuse/cuse.o 00:03:53.546 CXX test/cpp_headers/thread.o 00:03:53.546 CC app/vhost/vhost.o 00:03:53.546 LINK fdp 00:03:53.546 CXX test/cpp_headers/trace.o 00:03:53.804 LINK spdk_top 00:03:53.804 CXX test/cpp_headers/trace_parser.o 00:03:53.804 CC app/spdk_dd/spdk_dd.o 00:03:53.804 LINK vhost 00:03:53.804 LINK bdevperf 00:03:53.804 CXX test/cpp_headers/tree.o 00:03:53.804 CXX test/cpp_headers/ublk.o 00:03:53.804 CC app/fio/nvme/fio_plugin.o 00:03:53.804 CXX test/cpp_headers/util.o 00:03:53.804 CXX test/cpp_headers/uuid.o 00:03:53.804 CC app/fio/bdev/fio_plugin.o 00:03:53.804 CXX test/cpp_headers/version.o 00:03:53.804 CXX test/cpp_headers/vfio_user_pci.o 00:03:53.804 CXX test/cpp_headers/vfio_user_spec.o 00:03:53.804 CXX test/cpp_headers/vhost.o 00:03:54.062 CXX test/cpp_headers/vmd.o 00:03:54.062 CXX test/cpp_headers/xor.o 00:03:54.062 CXX test/cpp_headers/zipf.o 00:03:54.062 LINK spdk_dd 00:03:54.062 CC examples/nvmf/nvmf/nvmf.o 00:03:54.320 LINK spdk_bdev 00:03:54.321 LINK spdk_nvme 00:03:54.321 LINK nvmf 00:03:54.579 LINK cuse 00:03:54.840 LINK esnap 00:03:55.100 00:03:55.100 real 1m3.390s 00:03:55.100 user 5m57.004s 00:03:55.100 sys 1m6.652s 00:03:55.359 02:14:42 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:03:55.359 02:14:42 make -- common/autotest_common.sh@10 -- $ set +x 00:03:55.359 ************************************ 00:03:55.359 END TEST make 
00:03:55.359 ************************************ 00:03:55.359 02:14:42 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:55.359 02:14:42 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:55.359 02:14:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:55.359 02:14:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:55.359 02:14:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:55.359 02:14:42 -- pm/common@44 -- $ pid=5071 00:03:55.359 02:14:42 -- pm/common@50 -- $ kill -TERM 5071 00:03:55.359 02:14:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:55.359 02:14:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:55.359 02:14:42 -- pm/common@44 -- $ pid=5073 00:03:55.359 02:14:42 -- pm/common@50 -- $ kill -TERM 5073 00:03:55.360 02:14:42 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:55.360 02:14:42 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:55.360 02:14:42 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:55.360 02:14:42 -- common/autotest_common.sh@1691 -- # lcov --version 00:03:55.360 02:14:42 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:55.360 02:14:42 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:55.360 02:14:42 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:55.360 02:14:42 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:55.360 02:14:42 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:55.360 02:14:42 -- scripts/common.sh@336 -- # IFS=.-: 00:03:55.360 02:14:42 -- scripts/common.sh@336 -- # read -ra ver1 00:03:55.360 02:14:42 -- scripts/common.sh@337 -- # IFS=.-: 00:03:55.360 02:14:42 -- scripts/common.sh@337 -- # read -ra ver2 00:03:55.360 02:14:42 -- scripts/common.sh@338 -- # local 'op=<' 00:03:55.360 02:14:42 -- scripts/common.sh@340 -- # ver1_l=2 00:03:55.360 02:14:42 -- scripts/common.sh@341 -- # ver2_l=1 00:03:55.360 02:14:42 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:55.360 02:14:42 -- scripts/common.sh@344 -- # case "$op" in 00:03:55.360 02:14:42 -- scripts/common.sh@345 -- # : 1 00:03:55.360 02:14:42 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:55.360 02:14:42 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:55.360 02:14:42 -- scripts/common.sh@365 -- # decimal 1 00:03:55.360 02:14:42 -- scripts/common.sh@353 -- # local d=1 00:03:55.360 02:14:42 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:55.360 02:14:42 -- scripts/common.sh@355 -- # echo 1 00:03:55.360 02:14:42 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:55.360 02:14:42 -- scripts/common.sh@366 -- # decimal 2 00:03:55.360 02:14:42 -- scripts/common.sh@353 -- # local d=2 00:03:55.360 02:14:42 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:55.360 02:14:42 -- scripts/common.sh@355 -- # echo 2 00:03:55.360 02:14:42 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:55.360 02:14:42 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:55.360 02:14:42 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:55.360 02:14:42 -- scripts/common.sh@368 -- # return 0 00:03:55.360 02:14:42 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:55.360 02:14:42 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:55.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.360 --rc genhtml_branch_coverage=1 00:03:55.360 --rc genhtml_function_coverage=1 00:03:55.360 --rc genhtml_legend=1 00:03:55.360 --rc geninfo_all_blocks=1 00:03:55.360 --rc geninfo_unexecuted_blocks=1 00:03:55.360 00:03:55.360 ' 00:03:55.360 02:14:42 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:55.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.360 --rc genhtml_branch_coverage=1 00:03:55.360 --rc genhtml_function_coverage=1 00:03:55.360 --rc genhtml_legend=1 00:03:55.360 --rc geninfo_all_blocks=1 00:03:55.360 --rc geninfo_unexecuted_blocks=1 00:03:55.360 00:03:55.360 ' 00:03:55.360 02:14:42 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:55.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.360 --rc genhtml_branch_coverage=1 00:03:55.360 --rc genhtml_function_coverage=1 00:03:55.360 --rc genhtml_legend=1 00:03:55.360 --rc geninfo_all_blocks=1 00:03:55.360 --rc geninfo_unexecuted_blocks=1 00:03:55.360 00:03:55.360 ' 00:03:55.360 02:14:42 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:55.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.360 --rc genhtml_branch_coverage=1 00:03:55.360 --rc genhtml_function_coverage=1 00:03:55.360 --rc genhtml_legend=1 00:03:55.360 --rc geninfo_all_blocks=1 00:03:55.360 --rc geninfo_unexecuted_blocks=1 00:03:55.360 00:03:55.360 ' 00:03:55.360 02:14:42 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:55.360 02:14:42 -- nvmf/common.sh@7 -- # uname -s 00:03:55.360 02:14:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:55.360 02:14:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:55.360 02:14:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:55.360 02:14:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:55.360 02:14:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:55.360 02:14:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:55.360 02:14:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:55.360 02:14:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:55.360 02:14:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:55.360 02:14:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:55.360 02:14:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:518f79be-9b30-4da0-8ec1-5dca1ca1a8ef 00:03:55.360 
02:14:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=518f79be-9b30-4da0-8ec1-5dca1ca1a8ef 00:03:55.360 02:14:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:55.360 02:14:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:55.360 02:14:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:55.360 02:14:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:55.360 02:14:42 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:55.360 02:14:42 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:55.360 02:14:42 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:55.360 02:14:42 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:55.360 02:14:42 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:55.360 02:14:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:55.360 02:14:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:55.360 02:14:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:55.360 02:14:42 -- paths/export.sh@5 -- # export PATH 00:03:55.360 02:14:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:55.360 02:14:42 -- nvmf/common.sh@51 -- # : 0 00:03:55.360 02:14:42 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:55.360 02:14:42 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:55.360 02:14:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:55.360 02:14:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:55.360 02:14:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:55.360 02:14:42 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:55.360 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:55.360 02:14:42 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:55.360 02:14:42 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:55.360 02:14:42 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:55.360 02:14:42 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:55.360 02:14:42 -- spdk/autotest.sh@32 -- # uname -s 00:03:55.360 02:14:42 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:55.360 02:14:42 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:55.360 02:14:42 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:55.360 02:14:42 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:55.360 02:14:42 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:55.360 02:14:42 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:55.621 02:14:42 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:55.621 02:14:42 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:55.621 02:14:42 -- spdk/autotest.sh@48 -- # udevadm_pid=54217 00:03:55.621 02:14:42 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:55.621 02:14:42 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:55.621 02:14:42 -- pm/common@17 -- # local monitor 00:03:55.621 02:14:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:55.621 02:14:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:55.621 02:14:42 -- pm/common@25 -- # sleep 1 00:03:55.621 02:14:42 -- pm/common@21 -- # date +%s 00:03:55.621 02:14:42 -- pm/common@21 -- # date +%s 00:03:55.621 02:14:42 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730686482 00:03:55.621 02:14:42 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730686482 00:03:55.621 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730686482_collect-vmstat.pm.log 00:03:55.621 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730686482_collect-cpu-load.pm.log 00:03:56.562 02:14:43 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:56.562 02:14:43 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:56.562 02:14:43 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:56.562 02:14:43 -- common/autotest_common.sh@10 -- # set +x 00:03:56.562 02:14:43 -- spdk/autotest.sh@59 -- # create_test_list 00:03:56.562 02:14:43 -- common/autotest_common.sh@750 -- # xtrace_disable 00:03:56.562 02:14:43 -- common/autotest_common.sh@10 -- # set +x 00:03:56.563 02:14:43 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:56.563 02:14:43 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:56.563 02:14:43 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:56.563 02:14:43 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:56.563 02:14:43 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:56.563 02:14:43 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:56.563 02:14:43 -- common/autotest_common.sh@1455 -- # uname 00:03:56.563 02:14:43 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:56.563 02:14:43 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:56.563 02:14:43 -- common/autotest_common.sh@1475 -- # uname 00:03:56.563 02:14:43 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:56.563 02:14:43 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:56.563 02:14:43 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:56.563 lcov: LCOV version 1.15 00:03:56.563 02:14:43 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:11.475 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:11.475 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:29.640 02:15:14 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:29.640 02:15:14 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:29.640 02:15:14 -- common/autotest_common.sh@10 -- # set +x 00:04:29.640 02:15:14 -- spdk/autotest.sh@78 -- # rm -f 00:04:29.640 02:15:14 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:29.640 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:29.640 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:29.640 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:29.640 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:29.640 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:29.640 02:15:15 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:29.640 02:15:15 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:29.640 02:15:15 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:29.640 02:15:15 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:29.640 02:15:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:29.640 02:15:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:29.640 02:15:15 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:29.640 02:15:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:29.640 02:15:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:29.640 02:15:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:29.640 02:15:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:29.640 02:15:15 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:29.640 02:15:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:29.640 02:15:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:29.640 02:15:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:29.640 02:15:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:04:29.640 02:15:15 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:04:29.640 02:15:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:29.640 02:15:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:29.640 02:15:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:29.640 02:15:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:04:29.640 02:15:15 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:04:29.640 02:15:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:29.640 02:15:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:29.640 02:15:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:29.640 02:15:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:04:29.640 02:15:15 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:04:29.640 02:15:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:29.640 02:15:15 
-- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:29.640 02:15:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:29.640 02:15:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:04:29.640 02:15:15 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:04:29.640 02:15:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:29.640 02:15:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:29.640 02:15:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:29.640 02:15:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:04:29.640 02:15:15 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:04:29.640 02:15:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:29.640 02:15:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:29.640 02:15:15 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:29.640 02:15:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:29.640 02:15:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:29.640 02:15:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:29.640 02:15:15 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:29.640 02:15:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:29.640 No valid GPT data, bailing 00:04:29.640 02:15:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:29.640 02:15:15 -- scripts/common.sh@394 -- # pt= 00:04:29.640 02:15:15 -- scripts/common.sh@395 -- # return 1 00:04:29.640 02:15:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:29.640 1+0 records in 00:04:29.640 1+0 records out 00:04:29.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262136 s, 40.0 MB/s 00:04:29.640 02:15:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:29.640 02:15:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:29.640 02:15:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:29.640 02:15:15 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:29.640 02:15:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:29.640 No valid GPT data, bailing 00:04:29.640 02:15:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:29.640 02:15:15 -- scripts/common.sh@394 -- # pt= 00:04:29.640 02:15:15 -- scripts/common.sh@395 -- # return 1 00:04:29.640 02:15:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:29.640 1+0 records in 00:04:29.640 1+0 records out 00:04:29.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00535137 s, 196 MB/s 00:04:29.640 02:15:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:29.640 02:15:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:29.640 02:15:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:29.640 02:15:15 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:29.640 02:15:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:29.640 No valid GPT data, bailing 00:04:29.640 02:15:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:29.640 02:15:15 -- scripts/common.sh@394 -- # pt= 00:04:29.640 02:15:15 -- scripts/common.sh@395 -- # return 1 00:04:29.640 02:15:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:29.640 1+0 
records in 00:04:29.640 1+0 records out 00:04:29.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00547817 s, 191 MB/s 00:04:29.640 02:15:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:29.640 02:15:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:29.640 02:15:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:29.640 02:15:15 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:29.640 02:15:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:29.640 No valid GPT data, bailing 00:04:29.640 02:15:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:29.640 02:15:15 -- scripts/common.sh@394 -- # pt= 00:04:29.640 02:15:15 -- scripts/common.sh@395 -- # return 1 00:04:29.640 02:15:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:29.640 1+0 records in 00:04:29.640 1+0 records out 00:04:29.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00509402 s, 206 MB/s 00:04:29.640 02:15:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:29.640 02:15:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:29.640 02:15:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:29.640 02:15:15 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:29.640 02:15:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:29.640 No valid GPT data, bailing 00:04:29.640 02:15:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:29.640 02:15:15 -- scripts/common.sh@394 -- # pt= 00:04:29.640 02:15:15 -- scripts/common.sh@395 -- # return 1 00:04:29.640 02:15:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:29.640 1+0 records in 00:04:29.640 1+0 records out 00:04:29.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0054026 s, 194 MB/s 00:04:29.640 02:15:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:29.640 02:15:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:29.640 02:15:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:29.640 02:15:15 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:29.640 02:15:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:29.640 No valid GPT data, bailing 00:04:29.640 02:15:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:29.640 02:15:15 -- scripts/common.sh@394 -- # pt= 00:04:29.640 02:15:15 -- scripts/common.sh@395 -- # return 1 00:04:29.640 02:15:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:29.640 1+0 records in 00:04:29.640 1+0 records out 00:04:29.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00588505 s, 178 MB/s 00:04:29.640 02:15:15 -- spdk/autotest.sh@105 -- # sync 00:04:29.640 02:15:15 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:29.640 02:15:15 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:29.640 02:15:15 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:30.211 02:15:17 -- spdk/autotest.sh@111 -- # uname -s 00:04:30.211 02:15:17 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:30.211 02:15:17 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:30.211 02:15:17 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:30.784 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:31.045 
Hugepages 00:04:31.045 node hugesize free / total 00:04:31.045 node0 1048576kB 0 / 0 00:04:31.045 node0 2048kB 0 / 0 00:04:31.045 00:04:31.045 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:31.306 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:31.306 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:31.306 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:31.306 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:31.568 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:31.568 02:15:18 -- spdk/autotest.sh@117 -- # uname -s 00:04:31.568 02:15:18 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:31.568 02:15:18 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:31.568 02:15:18 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:31.829 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:32.400 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.400 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.400 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.661 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.661 02:15:19 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:33.623 02:15:20 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:33.623 02:15:20 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:33.623 02:15:20 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:33.623 02:15:20 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:33.623 02:15:20 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:33.623 02:15:20 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:33.623 02:15:20 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:33.623 02:15:20 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:33.623 02:15:20 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:33.623 02:15:20 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:33.623 02:15:20 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:33.623 02:15:20 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:33.884 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:34.144 Waiting for block devices as requested 00:04:34.144 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:34.144 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:34.404 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:34.404 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:39.695 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:39.695 02:15:26 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:39.695 02:15:26 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:39.695 02:15:26 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:39.695 02:15:26 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:39.695 02:15:26 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:39.695 02:15:26 -- common/autotest_common.sh@1486 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:39.695 02:15:26 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:39.695 02:15:26 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:39.695 02:15:26 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:39.695 02:15:26 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:39.695 02:15:26 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:39.695 02:15:26 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:39.695 02:15:26 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:39.695 02:15:26 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:39.695 02:15:26 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:39.695 02:15:26 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:39.695 02:15:26 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:39.695 02:15:26 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:39.695 02:15:26 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:39.695 02:15:26 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:39.695 02:15:26 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:39.695 02:15:26 -- common/autotest_common.sh@1541 -- # continue 00:04:39.695 02:15:26 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:39.695 02:15:26 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:39.695 02:15:26 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:39.695 02:15:26 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:39.695 02:15:26 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:39.695 02:15:26 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:39.695 02:15:26 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:39.695 02:15:26 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:39.695 02:15:26 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:39.695 02:15:26 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:39.695 02:15:26 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:39.695 02:15:26 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:39.696 02:15:26 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:39.696 02:15:26 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:39.696 02:15:26 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:39.696 02:15:26 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:39.696 02:15:26 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:39.696 02:15:26 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:39.696 02:15:26 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:39.696 02:15:26 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:39.696 02:15:26 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:39.696 02:15:26 -- common/autotest_common.sh@1541 -- # continue 00:04:39.696 02:15:26 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:39.696 02:15:26 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:39.696 02:15:26 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 
00:04:39.696 02:15:26 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:39.696 02:15:26 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:39.696 02:15:26 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:39.696 02:15:26 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:39.696 02:15:26 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:04:39.696 02:15:26 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:04:39.696 02:15:26 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:04:39.696 02:15:26 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:39.696 02:15:26 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:04:39.696 02:15:26 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:39.696 02:15:26 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:39.696 02:15:26 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:39.696 02:15:26 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:39.696 02:15:26 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:39.696 02:15:26 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:04:39.696 02:15:26 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:39.696 02:15:26 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:39.696 02:15:26 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:39.696 02:15:26 -- common/autotest_common.sh@1541 -- # continue 00:04:39.696 02:15:26 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:39.696 02:15:26 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:39.696 02:15:26 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:04:39.696 02:15:26 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:39.696 02:15:26 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:39.696 02:15:26 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:39.696 02:15:26 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:39.696 02:15:26 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:04:39.696 02:15:26 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:04:39.696 02:15:26 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:04:39.696 02:15:26 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:04:39.696 02:15:26 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:39.696 02:15:26 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:39.696 02:15:26 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:39.696 02:15:26 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:39.696 02:15:26 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:39.696 02:15:26 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:04:39.696 02:15:26 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:39.696 02:15:26 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:39.696 02:15:26 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:39.696 02:15:26 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 
00:04:39.696 02:15:26 -- common/autotest_common.sh@1541 -- # continue 00:04:39.696 02:15:26 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:39.696 02:15:26 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:39.696 02:15:26 -- common/autotest_common.sh@10 -- # set +x 00:04:39.696 02:15:26 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:39.696 02:15:26 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:39.696 02:15:26 -- common/autotest_common.sh@10 -- # set +x 00:04:39.696 02:15:26 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:39.956 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:40.528 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:40.528 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:40.528 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:40.528 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:40.789 02:15:27 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:40.789 02:15:27 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:40.789 02:15:27 -- common/autotest_common.sh@10 -- # set +x 00:04:40.789 02:15:27 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:40.789 02:15:27 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:40.789 02:15:27 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:40.789 02:15:27 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:40.789 02:15:27 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:40.789 02:15:27 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:40.789 02:15:27 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:40.789 02:15:27 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:40.789 02:15:27 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:40.789 02:15:27 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:40.789 02:15:27 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:40.789 02:15:27 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:40.789 02:15:27 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:40.789 02:15:27 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:40.789 02:15:27 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:40.789 02:15:27 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:40.789 02:15:27 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:40.789 02:15:27 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:40.789 02:15:27 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:40.789 02:15:27 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:40.789 02:15:27 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:40.789 02:15:27 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:40.789 02:15:27 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:40.789 02:15:27 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:40.789 02:15:27 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:40.789 02:15:27 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:40.789 02:15:27 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:04:40.789 02:15:27 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:40.789 02:15:27 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:40.789 02:15:27 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:40.789 02:15:27 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:40.789 02:15:27 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:40.789 02:15:27 -- common/autotest_common.sh@1570 -- # return 0 00:04:40.789 02:15:27 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:40.789 02:15:27 -- common/autotest_common.sh@1578 -- # return 0 00:04:40.789 02:15:27 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:40.789 02:15:27 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:40.789 02:15:27 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:40.789 02:15:27 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:40.789 02:15:27 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:40.789 02:15:27 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:40.789 02:15:27 -- common/autotest_common.sh@10 -- # set +x 00:04:40.789 02:15:27 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:40.789 02:15:27 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:40.789 02:15:27 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:40.789 02:15:27 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:40.789 02:15:27 -- common/autotest_common.sh@10 -- # set +x 00:04:40.789 ************************************ 00:04:40.789 START TEST env 00:04:40.789 ************************************ 00:04:40.789 02:15:27 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:41.050 * Looking for test storage... 00:04:41.050 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:41.050 02:15:27 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:41.050 02:15:27 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:41.050 02:15:27 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:41.050 02:15:27 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:41.050 02:15:27 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.050 02:15:27 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.050 02:15:27 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.050 02:15:27 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.050 02:15:27 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.050 02:15:27 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.050 02:15:27 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.050 02:15:27 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.050 02:15:27 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.050 02:15:27 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.050 02:15:27 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.050 02:15:27 env -- scripts/common.sh@344 -- # case "$op" in 00:04:41.050 02:15:27 env -- scripts/common.sh@345 -- # : 1 00:04:41.050 02:15:27 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.050 02:15:27 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.050 02:15:27 env -- scripts/common.sh@365 -- # decimal 1 00:04:41.050 02:15:27 env -- scripts/common.sh@353 -- # local d=1 00:04:41.050 02:15:27 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.050 02:15:27 env -- scripts/common.sh@355 -- # echo 1 00:04:41.050 02:15:27 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.050 02:15:27 env -- scripts/common.sh@366 -- # decimal 2 00:04:41.050 02:15:27 env -- scripts/common.sh@353 -- # local d=2 00:04:41.050 02:15:27 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.050 02:15:27 env -- scripts/common.sh@355 -- # echo 2 00:04:41.050 02:15:27 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.050 02:15:27 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.050 02:15:27 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.050 02:15:27 env -- scripts/common.sh@368 -- # return 0 00:04:41.050 02:15:27 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.050 02:15:27 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:41.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.050 --rc genhtml_branch_coverage=1 00:04:41.050 --rc genhtml_function_coverage=1 00:04:41.050 --rc genhtml_legend=1 00:04:41.050 --rc geninfo_all_blocks=1 00:04:41.050 --rc geninfo_unexecuted_blocks=1 00:04:41.050 00:04:41.050 ' 00:04:41.050 02:15:27 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:41.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.050 --rc genhtml_branch_coverage=1 00:04:41.050 --rc genhtml_function_coverage=1 00:04:41.050 --rc genhtml_legend=1 00:04:41.050 --rc geninfo_all_blocks=1 00:04:41.050 --rc geninfo_unexecuted_blocks=1 00:04:41.050 00:04:41.050 ' 00:04:41.050 02:15:27 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:41.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.050 --rc genhtml_branch_coverage=1 00:04:41.050 --rc genhtml_function_coverage=1 00:04:41.050 --rc genhtml_legend=1 00:04:41.050 --rc geninfo_all_blocks=1 00:04:41.050 --rc geninfo_unexecuted_blocks=1 00:04:41.050 00:04:41.050 ' 00:04:41.050 02:15:27 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:41.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.050 --rc genhtml_branch_coverage=1 00:04:41.050 --rc genhtml_function_coverage=1 00:04:41.050 --rc genhtml_legend=1 00:04:41.050 --rc geninfo_all_blocks=1 00:04:41.050 --rc geninfo_unexecuted_blocks=1 00:04:41.050 00:04:41.050 ' 00:04:41.050 02:15:27 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:41.050 02:15:27 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:41.050 02:15:27 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:41.050 02:15:27 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.050 ************************************ 00:04:41.050 START TEST env_memory 00:04:41.050 ************************************ 00:04:41.050 02:15:28 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:41.050 00:04:41.050 00:04:41.050 CUnit - A unit testing framework for C - Version 2.1-3 00:04:41.050 http://cunit.sourceforge.net/ 00:04:41.050 00:04:41.050 00:04:41.050 Suite: memory 00:04:41.050 Test: alloc and free memory map ...[2024-11-04 02:15:28.061396] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:41.050 passed 00:04:41.050 Test: mem map translation ...[2024-11-04 02:15:28.099966] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:41.050 [2024-11-04 02:15:28.100007] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:41.050 [2024-11-04 02:15:28.100065] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:41.050 [2024-11-04 02:15:28.100081] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:41.050 passed 00:04:41.312 Test: mem map registration ...[2024-11-04 02:15:28.167981] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:41.312 [2024-11-04 02:15:28.168023] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:41.312 passed 00:04:41.312 Test: mem map adjacent registrations ...passed 00:04:41.312 00:04:41.312 Run Summary: Type Total Ran Passed Failed Inactive 00:04:41.312 suites 1 1 n/a 0 0 00:04:41.312 tests 4 4 4 0 0 00:04:41.312 asserts 152 152 152 0 n/a 00:04:41.312 00:04:41.312 Elapsed time = 0.232 seconds 00:04:41.312 ************************************ 00:04:41.312 END TEST env_memory 00:04:41.312 ************************************ 00:04:41.312 00:04:41.312 real 0m0.268s 00:04:41.312 user 0m0.239s 00:04:41.312 sys 0m0.023s 00:04:41.312 02:15:28 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:41.312 02:15:28 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:41.312 02:15:28 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:41.312 02:15:28 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:41.312 02:15:28 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:41.312 02:15:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.312 ************************************ 00:04:41.312 START TEST env_vtophys 00:04:41.312 ************************************ 00:04:41.312 02:15:28 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:41.312 EAL: lib.eal log level changed from notice to debug 00:04:41.312 EAL: Detected lcore 0 as core 0 on socket 0 00:04:41.312 EAL: Detected lcore 1 as core 0 on socket 0 00:04:41.312 EAL: Detected lcore 2 as core 0 on socket 0 00:04:41.312 EAL: Detected lcore 3 as core 0 on socket 0 00:04:41.312 EAL: Detected lcore 4 as core 0 on socket 0 00:04:41.312 EAL: Detected lcore 5 as core 0 on socket 0 00:04:41.312 EAL: Detected lcore 6 as core 0 on socket 0 00:04:41.312 EAL: Detected lcore 7 as core 0 on socket 0 00:04:41.312 EAL: Detected lcore 8 as core 0 on socket 0 00:04:41.312 EAL: Detected lcore 9 as core 0 on socket 0 00:04:41.312 EAL: Maximum logical cores by configuration: 128 00:04:41.312 EAL: Detected CPU lcores: 10 00:04:41.312 EAL: Detected NUMA nodes: 1 00:04:41.312 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:41.312 EAL: Detected shared linkage of DPDK 00:04:41.312 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:41.312 EAL: Selected IOVA mode 'PA' 00:04:41.312 EAL: Probing VFIO support... 00:04:41.312 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:41.312 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:41.312 EAL: Ask a virtual area of 0x2e000 bytes 00:04:41.312 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:41.312 EAL: Setting up physically contiguous memory... 00:04:41.312 EAL: Setting maximum number of open files to 524288 00:04:41.312 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:41.312 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:41.312 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.312 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:41.312 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:41.312 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.312 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:41.312 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:41.312 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.312 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:41.312 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:41.312 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.312 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:41.312 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:41.312 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.312 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:41.312 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:41.312 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.312 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:41.312 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:41.312 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.312 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:41.312 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:41.312 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.312 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:41.312 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:41.312 EAL: Hugepages will be freed exactly as allocated. 00:04:41.312 EAL: No shared files mode enabled, IPC is disabled 00:04:41.312 EAL: No shared files mode enabled, IPC is disabled 00:04:41.573 EAL: TSC frequency is ~2600000 KHz 00:04:41.573 EAL: Main lcore 0 is ready (tid=7fc1f04dda40;cpuset=[0]) 00:04:41.573 EAL: Trying to obtain current memory policy. 00:04:41.573 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.573 EAL: Restoring previous memory policy: 0 00:04:41.573 EAL: request: mp_malloc_sync 00:04:41.573 EAL: No shared files mode enabled, IPC is disabled 00:04:41.573 EAL: Heap on socket 0 was expanded by 2MB 00:04:41.573 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:41.573 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:41.573 EAL: Mem event callback 'spdk:(nil)' registered 00:04:41.573 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:41.573 00:04:41.573 00:04:41.573 CUnit - A unit testing framework for C - Version 2.1-3 00:04:41.573 http://cunit.sourceforge.net/ 00:04:41.573 00:04:41.573 00:04:41.573 Suite: components_suite 00:04:41.834 Test: vtophys_malloc_test ...passed 00:04:41.834 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:41.834 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.834 EAL: Restoring previous memory policy: 4 00:04:41.834 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.834 EAL: request: mp_malloc_sync 00:04:41.834 EAL: No shared files mode enabled, IPC is disabled 00:04:41.834 EAL: Heap on socket 0 was expanded by 4MB 00:04:41.834 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.834 EAL: request: mp_malloc_sync 00:04:41.835 EAL: No shared files mode enabled, IPC is disabled 00:04:41.835 EAL: Heap on socket 0 was shrunk by 4MB 00:04:41.835 EAL: Trying to obtain current memory policy. 00:04:41.835 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.835 EAL: Restoring previous memory policy: 4 00:04:41.835 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.835 EAL: request: mp_malloc_sync 00:04:41.835 EAL: No shared files mode enabled, IPC is disabled 00:04:41.835 EAL: Heap on socket 0 was expanded by 6MB 00:04:41.835 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.835 EAL: request: mp_malloc_sync 00:04:41.835 EAL: No shared files mode enabled, IPC is disabled 00:04:41.835 EAL: Heap on socket 0 was shrunk by 6MB 00:04:41.835 EAL: Trying to obtain current memory policy. 00:04:41.835 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.835 EAL: Restoring previous memory policy: 4 00:04:41.835 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.835 EAL: request: mp_malloc_sync 00:04:41.835 EAL: No shared files mode enabled, IPC is disabled 00:04:41.835 EAL: Heap on socket 0 was expanded by 10MB 00:04:41.835 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.835 EAL: request: mp_malloc_sync 00:04:41.835 EAL: No shared files mode enabled, IPC is disabled 00:04:41.835 EAL: Heap on socket 0 was shrunk by 10MB 00:04:41.835 EAL: Trying to obtain current memory policy. 00:04:41.835 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.835 EAL: Restoring previous memory policy: 4 00:04:41.835 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.835 EAL: request: mp_malloc_sync 00:04:41.835 EAL: No shared files mode enabled, IPC is disabled 00:04:41.835 EAL: Heap on socket 0 was expanded by 18MB 00:04:41.835 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.835 EAL: request: mp_malloc_sync 00:04:41.835 EAL: No shared files mode enabled, IPC is disabled 00:04:41.835 EAL: Heap on socket 0 was shrunk by 18MB 00:04:41.835 EAL: Trying to obtain current memory policy. 00:04:41.835 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.835 EAL: Restoring previous memory policy: 4 00:04:41.835 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.835 EAL: request: mp_malloc_sync 00:04:41.835 EAL: No shared files mode enabled, IPC is disabled 00:04:41.835 EAL: Heap on socket 0 was expanded by 34MB 00:04:41.835 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.835 EAL: request: mp_malloc_sync 00:04:41.835 EAL: No shared files mode enabled, IPC is disabled 00:04:41.835 EAL: Heap on socket 0 was shrunk by 34MB 00:04:42.097 EAL: Trying to obtain current memory policy. 
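The expand/shrink pairs above come from vtophys_malloc_test walking through progressively larger allocations: each allocation that outgrows the heap fires the 'spdk:(nil)' mem event callback, and each free shrinks it again. A minimal sketch of one such round, assuming SPDK's public env API (spdk_dma_zmalloc/spdk_vtophys as declared in spdk/env.h; the 2MB alignment and printout are illustrative):

#include <inttypes.h>
#include <stdio.h>
#include "spdk/env.h"

/* Allocate a DMA-safe buffer, translate its virtual address to a
 * physical address, then free it. The allocate/free pair is what
 * drives the "expanded by"/"shrunk by" heap messages in the log. */
static int vtophys_round(size_t size)
{
	void *buf = spdk_dma_zmalloc(size, 0x200000, NULL);
	if (buf == NULL) {
		return -1;
	}
	uint64_t paddr = spdk_vtophys(buf, NULL);
	printf("vaddr=%p paddr=0x%" PRIx64 "\n", buf, paddr);
	spdk_dma_free(buf);
	return paddr == SPDK_VTOPHYS_ERROR ? -1 : 0;
}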
00:04:42.097 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.097 EAL: Restoring previous memory policy: 4 00:04:42.097 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.097 EAL: request: mp_malloc_sync 00:04:42.097 EAL: No shared files mode enabled, IPC is disabled 00:04:42.097 EAL: Heap on socket 0 was expanded by 66MB 00:04:42.097 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.097 EAL: request: mp_malloc_sync 00:04:42.097 EAL: No shared files mode enabled, IPC is disabled 00:04:42.097 EAL: Heap on socket 0 was shrunk by 66MB 00:04:42.097 EAL: Trying to obtain current memory policy. 00:04:42.097 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.097 EAL: Restoring previous memory policy: 4 00:04:42.097 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.097 EAL: request: mp_malloc_sync 00:04:42.097 EAL: No shared files mode enabled, IPC is disabled 00:04:42.097 EAL: Heap on socket 0 was expanded by 130MB 00:04:42.357 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.357 EAL: request: mp_malloc_sync 00:04:42.357 EAL: No shared files mode enabled, IPC is disabled 00:04:42.357 EAL: Heap on socket 0 was shrunk by 130MB 00:04:42.357 EAL: Trying to obtain current memory policy. 00:04:42.357 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.357 EAL: Restoring previous memory policy: 4 00:04:42.358 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.358 EAL: request: mp_malloc_sync 00:04:42.358 EAL: No shared files mode enabled, IPC is disabled 00:04:42.358 EAL: Heap on socket 0 was expanded by 258MB 00:04:42.925 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.925 EAL: request: mp_malloc_sync 00:04:42.925 EAL: No shared files mode enabled, IPC is disabled 00:04:42.925 EAL: Heap on socket 0 was shrunk by 258MB 00:04:42.926 EAL: Trying to obtain current memory policy. 00:04:42.926 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.185 EAL: Restoring previous memory policy: 4 00:04:43.185 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.185 EAL: request: mp_malloc_sync 00:04:43.185 EAL: No shared files mode enabled, IPC is disabled 00:04:43.185 EAL: Heap on socket 0 was expanded by 514MB 00:04:43.756 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.756 EAL: request: mp_malloc_sync 00:04:43.756 EAL: No shared files mode enabled, IPC is disabled 00:04:43.756 EAL: Heap on socket 0 was shrunk by 514MB 00:04:44.385 EAL: Trying to obtain current memory policy. 
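The interleaved 'Run Summary' blocks are CUnit's standard report, so each env binary here is a small CUnit program. A hedged skeleton of how such a test registers with the framework, assuming CUnit 2.1-3 as the banner above states (suite and test names follow the log; the test body is illustrative):

#include <CUnit/Basic.h>

static void vtophys_malloc_test(void)
{
	/* The real test allocates and frees buffers of many sizes;
	 * each CU_ASSERT feeds the "asserts" column of the summary. */
	CU_ASSERT(1 == 1);
}

int main(void)
{
	CU_initialize_registry();
	CU_pSuite suite = CU_add_suite("components_suite", NULL, NULL);
	CU_add_test(suite, "vtophys_malloc_test", vtophys_malloc_test);
	CU_basic_set_mode(CU_BRM_VERBOSE);
	CU_basic_run_tests(); /* prints the Run Summary table seen above */
	unsigned int failures = CU_get_number_of_failures();
	CU_cleanup_registry();
	return failures ? 1 : 0;
}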
00:04:44.385 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.385 EAL: Restoring previous memory policy: 4 00:04:44.385 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.385 EAL: request: mp_malloc_sync 00:04:44.385 EAL: No shared files mode enabled, IPC is disabled 00:04:44.385 EAL: Heap on socket 0 was expanded by 1026MB 00:04:45.772 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.772 EAL: request: mp_malloc_sync 00:04:45.772 EAL: No shared files mode enabled, IPC is disabled 00:04:45.772 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:46.708 passed 00:04:46.708 00:04:46.708 Run Summary: Type Total Ran Passed Failed Inactive 00:04:46.708 suites 1 1 n/a 0 0 00:04:46.709 tests 2 2 2 0 0 00:04:46.709 asserts 5936 5936 5936 0 n/a 00:04:46.709 00:04:46.709 Elapsed time = 4.992 seconds 00:04:46.709 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.709 EAL: request: mp_malloc_sync 00:04:46.709 EAL: No shared files mode enabled, IPC is disabled 00:04:46.709 EAL: Heap on socket 0 was shrunk by 2MB 00:04:46.709 EAL: No shared files mode enabled, IPC is disabled 00:04:46.709 EAL: No shared files mode enabled, IPC is disabled 00:04:46.709 EAL: No shared files mode enabled, IPC is disabled 00:04:46.709 00:04:46.709 real 0m5.261s 00:04:46.709 user 0m4.447s 00:04:46.709 sys 0m0.664s 00:04:46.709 02:15:33 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:46.709 ************************************ 00:04:46.709 END TEST env_vtophys 00:04:46.709 ************************************ 00:04:46.709 02:15:33 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:46.709 02:15:33 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:46.709 02:15:33 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:46.709 02:15:33 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:46.709 02:15:33 env -- common/autotest_common.sh@10 -- # set +x 00:04:46.709 ************************************ 00:04:46.709 START TEST env_pci 00:04:46.709 ************************************ 00:04:46.709 02:15:33 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:46.709 00:04:46.709 00:04:46.709 CUnit - A unit testing framework for C - Version 2.1-3 00:04:46.709 http://cunit.sourceforge.net/ 00:04:46.709 00:04:46.709 00:04:46.709 Suite: pci 00:04:46.709 Test: pci_hook ...[2024-11-04 02:15:33.665967] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56990 has claimed it 00:04:46.709 passed 00:04:46.709 00:04:46.709 Run Summary: Type Total Ran Passed Failed Inactive 00:04:46.709 suites 1 1 n/a 0 0 00:04:46.709 tests 1 1 1 0 0 00:04:46.709 asserts 25 25 25 0 n/a 00:04:46.709 00:04:46.709 Elapsed time = 0.007 seconds 00:04:46.709 EAL: Cannot find device (10000:00:01.0) 00:04:46.709 EAL: Failed to attach device on primary process 00:04:46.709 00:04:46.709 real 0m0.064s 00:04:46.709 user 0m0.033s 00:04:46.709 sys 0m0.031s 00:04:46.709 02:15:33 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:46.709 02:15:33 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:46.709 ************************************ 00:04:46.709 END TEST env_pci 00:04:46.709 ************************************ 00:04:46.709 02:15:33 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:46.709 02:15:33 env -- env/env.sh@15 -- # uname 00:04:46.709 02:15:33 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:46.709 02:15:33 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:46.709 02:15:33 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:46.709 02:15:33 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:04:46.709 02:15:33 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:46.709 02:15:33 env -- common/autotest_common.sh@10 -- # set +x 00:04:46.709 ************************************ 00:04:46.709 START TEST env_dpdk_post_init 00:04:46.709 ************************************ 00:04:46.709 02:15:33 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:46.709 EAL: Detected CPU lcores: 10 00:04:46.709 EAL: Detected NUMA nodes: 1 00:04:46.709 EAL: Detected shared linkage of DPDK 00:04:46.970 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:46.970 EAL: Selected IOVA mode 'PA' 00:04:46.970 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:46.970 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:46.970 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:46.970 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:46.971 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:46.971 Starting DPDK initialization... 00:04:46.971 Starting SPDK post initialization... 00:04:46.971 SPDK NVMe probe 00:04:46.971 Attaching to 0000:00:10.0 00:04:46.971 Attaching to 0000:00:11.0 00:04:46.971 Attaching to 0000:00:12.0 00:04:46.971 Attaching to 0000:00:13.0 00:04:46.971 Attached to 0000:00:10.0 00:04:46.971 Attached to 0000:00:11.0 00:04:46.971 Attached to 0000:00:13.0 00:04:46.971 Attached to 0000:00:12.0 00:04:46.971 Cleaning up... 
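The 'Attaching to'/'Attached to' lines are the standard SPDK probe/attach handshake: a probe callback decides whether to claim each controller the bus scan finds, and an attach callback runs once the controller is initialized. A minimal sketch using SPDK's public NVMe API (callback signatures as in spdk/nvme.h; error handling trimmed, application name illustrative):

#include <stdbool.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
		     struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attaching to %s\n", trid->traddr);
	return true; /* claim every controller found, as the test does */
}

static void attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
		      struct spdk_nvme_ctrlr *ctrlr,
		      const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attached to %s\n", trid->traddr);
}

int main(void)
{
	struct spdk_env_opts opts;
	spdk_env_opts_init(&opts);
	opts.name = "post_init_sketch"; /* illustrative name */
	if (spdk_env_init(&opts) != 0) {
		return 1;
	}
	/* Scans the local PCIe bus, producing one probe_cb/attach_cb
	 * pair per NVMe controller (four in the run above). */
	return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
}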
00:04:46.971 00:04:46.971 real 0m0.244s 00:04:46.971 user 0m0.083s 00:04:46.971 sys 0m0.063s 00:04:46.971 02:15:34 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:46.971 02:15:34 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:46.971 ************************************ 00:04:46.971 END TEST env_dpdk_post_init 00:04:46.971 ************************************ 00:04:46.971 02:15:34 env -- env/env.sh@26 -- # uname 00:04:46.971 02:15:34 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:46.971 02:15:34 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:46.971 02:15:34 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:46.971 02:15:34 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:46.971 02:15:34 env -- common/autotest_common.sh@10 -- # set +x 00:04:46.971 ************************************ 00:04:46.971 START TEST env_mem_callbacks 00:04:46.971 ************************************ 00:04:46.971 02:15:34 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:47.230 EAL: Detected CPU lcores: 10 00:04:47.230 EAL: Detected NUMA nodes: 1 00:04:47.230 EAL: Detected shared linkage of DPDK 00:04:47.230 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:47.230 EAL: Selected IOVA mode 'PA' 00:04:47.230 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:47.230 00:04:47.230 00:04:47.230 CUnit - A unit testing framework for C - Version 2.1-3 00:04:47.230 http://cunit.sourceforge.net/ 00:04:47.230 00:04:47.230 00:04:47.230 Suite: memory 00:04:47.230 Test: test ... 00:04:47.230 register 0x200000200000 2097152 00:04:47.230 malloc 3145728 00:04:47.230 register 0x200000400000 4194304 00:04:47.230 buf 0x2000004fffc0 len 3145728 PASSED 00:04:47.230 malloc 64 00:04:47.230 buf 0x2000004ffec0 len 64 PASSED 00:04:47.230 malloc 4194304 00:04:47.230 register 0x200000800000 6291456 00:04:47.230 buf 0x2000009fffc0 len 4194304 PASSED 00:04:47.230 free 0x2000004fffc0 3145728 00:04:47.230 free 0x2000004ffec0 64 00:04:47.230 unregister 0x200000400000 4194304 PASSED 00:04:47.230 free 0x2000009fffc0 4194304 00:04:47.230 unregister 0x200000800000 6291456 PASSED 00:04:47.230 malloc 8388608 00:04:47.230 register 0x200000400000 10485760 00:04:47.230 buf 0x2000005fffc0 len 8388608 PASSED 00:04:47.230 free 0x2000005fffc0 8388608 00:04:47.230 unregister 0x200000400000 10485760 PASSED 00:04:47.230 passed 00:04:47.230 00:04:47.230 Run Summary: Type Total Ran Passed Failed Inactive 00:04:47.230 suites 1 1 n/a 0 0 00:04:47.230 tests 1 1 1 0 0 00:04:47.230 asserts 15 15 15 0 n/a 00:04:47.230 00:04:47.230 Elapsed time = 0.046 seconds 00:04:47.230 00:04:47.230 real 0m0.214s 00:04:47.230 user 0m0.064s 00:04:47.230 sys 0m0.047s 00:04:47.230 02:15:34 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:47.230 02:15:34 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:47.230 ************************************ 00:04:47.230 END TEST env_mem_callbacks 00:04:47.230 ************************************ 00:04:47.230 00:04:47.230 real 0m6.455s 00:04:47.230 user 0m5.011s 00:04:47.230 sys 0m1.035s 00:04:47.230 02:15:34 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:47.230 02:15:34 env -- common/autotest_common.sh@10 -- # set +x 00:04:47.230 ************************************ 00:04:47.230 END TEST env 00:04:47.230 
************************************ 00:04:47.230 02:15:34 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:47.230 02:15:34 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:47.230 02:15:34 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:47.230 02:15:34 -- common/autotest_common.sh@10 -- # set +x 00:04:47.489 ************************************ 00:04:47.489 START TEST rpc 00:04:47.489 ************************************ 00:04:47.489 02:15:34 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:47.489 * Looking for test storage... 00:04:47.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:47.489 02:15:34 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:47.489 02:15:34 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:47.489 02:15:34 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:47.489 02:15:34 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:47.489 02:15:34 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.489 02:15:34 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.489 02:15:34 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.489 02:15:34 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.489 02:15:34 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.489 02:15:34 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.489 02:15:34 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.489 02:15:34 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.489 02:15:34 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.489 02:15:34 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.489 02:15:34 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.489 02:15:34 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:47.489 02:15:34 rpc -- scripts/common.sh@345 -- # : 1 00:04:47.489 02:15:34 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.489 02:15:34 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.489 02:15:34 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:47.489 02:15:34 rpc -- scripts/common.sh@353 -- # local d=1 00:04:47.489 02:15:34 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.489 02:15:34 rpc -- scripts/common.sh@355 -- # echo 1 00:04:47.489 02:15:34 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.489 02:15:34 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:47.489 02:15:34 rpc -- scripts/common.sh@353 -- # local d=2 00:04:47.489 02:15:34 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.489 02:15:34 rpc -- scripts/common.sh@355 -- # echo 2 00:04:47.489 02:15:34 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.489 02:15:34 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.489 02:15:34 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.489 02:15:34 rpc -- scripts/common.sh@368 -- # return 0 00:04:47.489 02:15:34 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.489 02:15:34 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:47.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.489 --rc genhtml_branch_coverage=1 00:04:47.489 --rc genhtml_function_coverage=1 00:04:47.489 --rc genhtml_legend=1 00:04:47.489 --rc geninfo_all_blocks=1 00:04:47.489 --rc geninfo_unexecuted_blocks=1 00:04:47.489 00:04:47.489 ' 00:04:47.489 02:15:34 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:47.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.489 --rc genhtml_branch_coverage=1 00:04:47.489 --rc genhtml_function_coverage=1 00:04:47.489 --rc genhtml_legend=1 00:04:47.489 --rc geninfo_all_blocks=1 00:04:47.489 --rc geninfo_unexecuted_blocks=1 00:04:47.489 00:04:47.489 ' 00:04:47.489 02:15:34 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:47.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.489 --rc genhtml_branch_coverage=1 00:04:47.489 --rc genhtml_function_coverage=1 00:04:47.489 --rc genhtml_legend=1 00:04:47.489 --rc geninfo_all_blocks=1 00:04:47.489 --rc geninfo_unexecuted_blocks=1 00:04:47.489 00:04:47.489 ' 00:04:47.489 02:15:34 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:47.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.489 --rc genhtml_branch_coverage=1 00:04:47.489 --rc genhtml_function_coverage=1 00:04:47.489 --rc genhtml_legend=1 00:04:47.489 --rc geninfo_all_blocks=1 00:04:47.489 --rc geninfo_unexecuted_blocks=1 00:04:47.489 00:04:47.489 ' 00:04:47.489 02:15:34 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57117 00:04:47.489 02:15:34 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:47.489 02:15:34 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57117 00:04:47.489 02:15:34 rpc -- common/autotest_common.sh@833 -- # '[' -z 57117 ']' 00:04:47.489 02:15:34 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.489 02:15:34 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:47.489 02:15:34 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:47.489 02:15:34 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
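waitforlisten polls until spdk_tgt's JSON-RPC socket accepts connections, after which rpc_cmd sends JSON requests over it. A hedged sketch of a raw client doing the same exchange by hand (rpc_get_methods is SPDK's standard introspection call; rpc.py normally wraps this):

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
	int fd = socket(AF_UNIX, SOCK_STREAM, 0);
	struct sockaddr_un addr = { .sun_family = AF_UNIX };
	strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
	/* waitforlisten simply retries until this connect() succeeds */
	if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
		perror("connect");
		return 1;
	}
	const char *req =
		"{\"jsonrpc\":\"2.0\",\"method\":\"rpc_get_methods\",\"id\":1}";
	write(fd, req, strlen(req));
	char buf[4096];
	ssize_t n = read(fd, buf, sizeof(buf) - 1);
	if (n > 0) {
		buf[n] = '\0';
		printf("%s\n", buf); /* JSON array of available RPC methods */
	}
	close(fd);
	return 0;
}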
00:04:47.489 02:15:34 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:47.489 02:15:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.489 [2024-11-04 02:15:34.554501] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:04:47.489 [2024-11-04 02:15:34.554608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57117 ] 00:04:47.748 [2024-11-04 02:15:34.714158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.748 [2024-11-04 02:15:34.808407] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:47.748 [2024-11-04 02:15:34.808451] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57117' to capture a snapshot of events at runtime. 00:04:47.748 [2024-11-04 02:15:34.808461] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:47.748 [2024-11-04 02:15:34.808470] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:47.748 [2024-11-04 02:15:34.808478] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57117 for offline analysis/debug. 00:04:47.748 [2024-11-04 02:15:34.809314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.315 02:15:35 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:48.315 02:15:35 rpc -- common/autotest_common.sh@866 -- # return 0 00:04:48.315 02:15:35 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:48.315 02:15:35 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:48.315 02:15:35 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:48.315 02:15:35 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:48.315 02:15:35 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:48.315 02:15:35 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:48.315 02:15:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.315 ************************************ 00:04:48.315 START TEST rpc_integrity 00:04:48.315 ************************************ 00:04:48.315 02:15:35 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:48.315 02:15:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:48.315 02:15:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.315 02:15:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.315 02:15:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.315 02:15:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:48.315 02:15:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:48.574 02:15:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:48.574 02:15:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:48.574 02:15:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.574 02:15:35 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.574 02:15:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.574 02:15:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:48.574 02:15:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:48.574 02:15:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.574 02:15:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.574 02:15:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.574 02:15:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:48.574 { 00:04:48.574 "name": "Malloc0", 00:04:48.574 "aliases": [ 00:04:48.574 "ab6b0476-2f21-42c5-8cf9-660fc0d9aae4" 00:04:48.574 ], 00:04:48.574 "product_name": "Malloc disk", 00:04:48.574 "block_size": 512, 00:04:48.574 "num_blocks": 16384, 00:04:48.574 "uuid": "ab6b0476-2f21-42c5-8cf9-660fc0d9aae4", 00:04:48.574 "assigned_rate_limits": { 00:04:48.574 "rw_ios_per_sec": 0, 00:04:48.574 "rw_mbytes_per_sec": 0, 00:04:48.574 "r_mbytes_per_sec": 0, 00:04:48.574 "w_mbytes_per_sec": 0 00:04:48.574 }, 00:04:48.574 "claimed": false, 00:04:48.574 "zoned": false, 00:04:48.574 "supported_io_types": { 00:04:48.574 "read": true, 00:04:48.574 "write": true, 00:04:48.574 "unmap": true, 00:04:48.574 "flush": true, 00:04:48.574 "reset": true, 00:04:48.574 "nvme_admin": false, 00:04:48.574 "nvme_io": false, 00:04:48.574 "nvme_io_md": false, 00:04:48.574 "write_zeroes": true, 00:04:48.574 "zcopy": true, 00:04:48.574 "get_zone_info": false, 00:04:48.574 "zone_management": false, 00:04:48.574 "zone_append": false, 00:04:48.574 "compare": false, 00:04:48.574 "compare_and_write": false, 00:04:48.574 "abort": true, 00:04:48.574 "seek_hole": false, 00:04:48.574 "seek_data": false, 00:04:48.574 "copy": true, 00:04:48.574 "nvme_iov_md": false 00:04:48.574 }, 00:04:48.574 "memory_domains": [ 00:04:48.574 { 00:04:48.574 "dma_device_id": "system", 00:04:48.574 "dma_device_type": 1 00:04:48.574 }, 00:04:48.574 { 00:04:48.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:48.574 "dma_device_type": 2 00:04:48.574 } 00:04:48.574 ], 00:04:48.574 "driver_specific": {} 00:04:48.574 } 00:04:48.574 ]' 00:04:48.574 02:15:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:48.574 02:15:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:48.574 02:15:35 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:48.574 02:15:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.574 02:15:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.574 [2024-11-04 02:15:35.509267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:48.574 [2024-11-04 02:15:35.509327] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:48.574 [2024-11-04 02:15:35.509354] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:48.574 [2024-11-04 02:15:35.509366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:48.574 [2024-11-04 02:15:35.511508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:48.574 [2024-11-04 02:15:35.511547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:48.574 Passthru0 00:04:48.574 02:15:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.574 
02:15:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:48.574 02:15:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.574 02:15:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.574 02:15:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.574 02:15:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:48.574 { 00:04:48.574 "name": "Malloc0", 00:04:48.574 "aliases": [ 00:04:48.574 "ab6b0476-2f21-42c5-8cf9-660fc0d9aae4" 00:04:48.574 ], 00:04:48.574 "product_name": "Malloc disk", 00:04:48.574 "block_size": 512, 00:04:48.574 "num_blocks": 16384, 00:04:48.574 "uuid": "ab6b0476-2f21-42c5-8cf9-660fc0d9aae4", 00:04:48.574 "assigned_rate_limits": { 00:04:48.574 "rw_ios_per_sec": 0, 00:04:48.575 "rw_mbytes_per_sec": 0, 00:04:48.575 "r_mbytes_per_sec": 0, 00:04:48.575 "w_mbytes_per_sec": 0 00:04:48.575 }, 00:04:48.575 "claimed": true, 00:04:48.575 "claim_type": "exclusive_write", 00:04:48.575 "zoned": false, 00:04:48.575 "supported_io_types": { 00:04:48.575 "read": true, 00:04:48.575 "write": true, 00:04:48.575 "unmap": true, 00:04:48.575 "flush": true, 00:04:48.575 "reset": true, 00:04:48.575 "nvme_admin": false, 00:04:48.575 "nvme_io": false, 00:04:48.575 "nvme_io_md": false, 00:04:48.575 "write_zeroes": true, 00:04:48.575 "zcopy": true, 00:04:48.575 "get_zone_info": false, 00:04:48.575 "zone_management": false, 00:04:48.575 "zone_append": false, 00:04:48.575 "compare": false, 00:04:48.575 "compare_and_write": false, 00:04:48.575 "abort": true, 00:04:48.575 "seek_hole": false, 00:04:48.575 "seek_data": false, 00:04:48.575 "copy": true, 00:04:48.575 "nvme_iov_md": false 00:04:48.575 }, 00:04:48.575 "memory_domains": [ 00:04:48.575 { 00:04:48.575 "dma_device_id": "system", 00:04:48.575 "dma_device_type": 1 00:04:48.575 }, 00:04:48.575 { 00:04:48.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:48.575 "dma_device_type": 2 00:04:48.575 } 00:04:48.575 ], 00:04:48.575 "driver_specific": {} 00:04:48.575 }, 00:04:48.575 { 00:04:48.575 "name": "Passthru0", 00:04:48.575 "aliases": [ 00:04:48.575 "3ff28522-aace-5e7f-960c-7a310b3b2090" 00:04:48.575 ], 00:04:48.575 "product_name": "passthru", 00:04:48.575 "block_size": 512, 00:04:48.575 "num_blocks": 16384, 00:04:48.575 "uuid": "3ff28522-aace-5e7f-960c-7a310b3b2090", 00:04:48.575 "assigned_rate_limits": { 00:04:48.575 "rw_ios_per_sec": 0, 00:04:48.575 "rw_mbytes_per_sec": 0, 00:04:48.575 "r_mbytes_per_sec": 0, 00:04:48.575 "w_mbytes_per_sec": 0 00:04:48.575 }, 00:04:48.575 "claimed": false, 00:04:48.575 "zoned": false, 00:04:48.575 "supported_io_types": { 00:04:48.575 "read": true, 00:04:48.575 "write": true, 00:04:48.575 "unmap": true, 00:04:48.575 "flush": true, 00:04:48.575 "reset": true, 00:04:48.575 "nvme_admin": false, 00:04:48.575 "nvme_io": false, 00:04:48.575 "nvme_io_md": false, 00:04:48.575 "write_zeroes": true, 00:04:48.575 "zcopy": true, 00:04:48.575 "get_zone_info": false, 00:04:48.575 "zone_management": false, 00:04:48.575 "zone_append": false, 00:04:48.575 "compare": false, 00:04:48.575 "compare_and_write": false, 00:04:48.575 "abort": true, 00:04:48.575 "seek_hole": false, 00:04:48.575 "seek_data": false, 00:04:48.575 "copy": true, 00:04:48.575 "nvme_iov_md": false 00:04:48.575 }, 00:04:48.575 "memory_domains": [ 00:04:48.575 { 00:04:48.575 "dma_device_id": "system", 00:04:48.575 "dma_device_type": 1 00:04:48.575 }, 00:04:48.575 { 00:04:48.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:48.575 "dma_device_type": 2 
00:04:48.575 } 00:04:48.575 ], 00:04:48.575 "driver_specific": { 00:04:48.575 "passthru": { 00:04:48.575 "name": "Passthru0", 00:04:48.575 "base_bdev_name": "Malloc0" 00:04:48.575 } 00:04:48.575 } 00:04:48.575 } 00:04:48.575 ]' 00:04:48.575 02:15:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:48.575 02:15:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:48.575 02:15:35 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:48.575 02:15:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.575 02:15:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.575 02:15:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.575 02:15:35 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:48.575 02:15:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.575 02:15:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.575 02:15:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.575 02:15:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:48.575 02:15:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.575 02:15:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.575 02:15:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.575 02:15:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:48.575 02:15:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:48.575 02:15:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:48.575 00:04:48.575 real 0m0.248s 00:04:48.575 user 0m0.129s 00:04:48.575 sys 0m0.038s 00:04:48.575 02:15:35 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:48.575 02:15:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.575 ************************************ 00:04:48.575 END TEST rpc_integrity 00:04:48.575 ************************************ 00:04:48.575 02:15:35 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:48.575 02:15:35 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:48.575 02:15:35 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:48.575 02:15:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.575 ************************************ 00:04:48.575 START TEST rpc_plugins 00:04:48.575 ************************************ 00:04:48.575 02:15:35 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:04:48.834 02:15:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:48.834 02:15:35 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.834 02:15:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:48.834 02:15:35 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.834 02:15:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:48.834 02:15:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:48.834 02:15:35 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.834 02:15:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:48.834 02:15:35 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.834 02:15:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:48.834 { 00:04:48.834 "name": "Malloc1", 00:04:48.834 "aliases": 
[ 00:04:48.834 "cb512206-8a7b-47b8-985d-8cc093a5d7f6" 00:04:48.834 ], 00:04:48.834 "product_name": "Malloc disk", 00:04:48.834 "block_size": 4096, 00:04:48.834 "num_blocks": 256, 00:04:48.834 "uuid": "cb512206-8a7b-47b8-985d-8cc093a5d7f6", 00:04:48.834 "assigned_rate_limits": { 00:04:48.834 "rw_ios_per_sec": 0, 00:04:48.834 "rw_mbytes_per_sec": 0, 00:04:48.834 "r_mbytes_per_sec": 0, 00:04:48.834 "w_mbytes_per_sec": 0 00:04:48.834 }, 00:04:48.834 "claimed": false, 00:04:48.834 "zoned": false, 00:04:48.834 "supported_io_types": { 00:04:48.834 "read": true, 00:04:48.834 "write": true, 00:04:48.834 "unmap": true, 00:04:48.834 "flush": true, 00:04:48.834 "reset": true, 00:04:48.834 "nvme_admin": false, 00:04:48.834 "nvme_io": false, 00:04:48.834 "nvme_io_md": false, 00:04:48.834 "write_zeroes": true, 00:04:48.834 "zcopy": true, 00:04:48.834 "get_zone_info": false, 00:04:48.834 "zone_management": false, 00:04:48.834 "zone_append": false, 00:04:48.834 "compare": false, 00:04:48.834 "compare_and_write": false, 00:04:48.834 "abort": true, 00:04:48.834 "seek_hole": false, 00:04:48.834 "seek_data": false, 00:04:48.834 "copy": true, 00:04:48.834 "nvme_iov_md": false 00:04:48.834 }, 00:04:48.834 "memory_domains": [ 00:04:48.834 { 00:04:48.834 "dma_device_id": "system", 00:04:48.834 "dma_device_type": 1 00:04:48.834 }, 00:04:48.834 { 00:04:48.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:48.834 "dma_device_type": 2 00:04:48.834 } 00:04:48.834 ], 00:04:48.834 "driver_specific": {} 00:04:48.834 } 00:04:48.834 ]' 00:04:48.834 02:15:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:48.834 02:15:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:48.834 02:15:35 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:48.834 02:15:35 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.834 02:15:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:48.834 02:15:35 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.834 02:15:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:48.834 02:15:35 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.834 02:15:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:48.834 02:15:35 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.834 02:15:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:48.834 02:15:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:48.834 02:15:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:48.834 00:04:48.834 real 0m0.118s 00:04:48.834 user 0m0.065s 00:04:48.834 sys 0m0.018s 00:04:48.834 02:15:35 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:48.834 02:15:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:48.834 ************************************ 00:04:48.834 END TEST rpc_plugins 00:04:48.834 ************************************ 00:04:48.834 02:15:35 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:48.834 02:15:35 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:48.834 02:15:35 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:48.834 02:15:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.834 ************************************ 00:04:48.834 START TEST rpc_trace_cmd_test 00:04:48.834 ************************************ 00:04:48.834 02:15:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 
-- # rpc_trace_cmd_test 00:04:48.834 02:15:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:48.834 02:15:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:48.834 02:15:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.834 02:15:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:48.834 02:15:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.834 02:15:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:48.834 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57117", 00:04:48.834 "tpoint_group_mask": "0x8", 00:04:48.834 "iscsi_conn": { 00:04:48.834 "mask": "0x2", 00:04:48.834 "tpoint_mask": "0x0" 00:04:48.834 }, 00:04:48.834 "scsi": { 00:04:48.834 "mask": "0x4", 00:04:48.834 "tpoint_mask": "0x0" 00:04:48.834 }, 00:04:48.834 "bdev": { 00:04:48.834 "mask": "0x8", 00:04:48.834 "tpoint_mask": "0xffffffffffffffff" 00:04:48.834 }, 00:04:48.834 "nvmf_rdma": { 00:04:48.834 "mask": "0x10", 00:04:48.834 "tpoint_mask": "0x0" 00:04:48.834 }, 00:04:48.834 "nvmf_tcp": { 00:04:48.834 "mask": "0x20", 00:04:48.834 "tpoint_mask": "0x0" 00:04:48.834 }, 00:04:48.834 "ftl": { 00:04:48.834 "mask": "0x40", 00:04:48.834 "tpoint_mask": "0x0" 00:04:48.834 }, 00:04:48.834 "blobfs": { 00:04:48.834 "mask": "0x80", 00:04:48.834 "tpoint_mask": "0x0" 00:04:48.834 }, 00:04:48.834 "dsa": { 00:04:48.834 "mask": "0x200", 00:04:48.834 "tpoint_mask": "0x0" 00:04:48.834 }, 00:04:48.834 "thread": { 00:04:48.834 "mask": "0x400", 00:04:48.834 "tpoint_mask": "0x0" 00:04:48.834 }, 00:04:48.834 "nvme_pcie": { 00:04:48.834 "mask": "0x800", 00:04:48.834 "tpoint_mask": "0x0" 00:04:48.834 }, 00:04:48.834 "iaa": { 00:04:48.834 "mask": "0x1000", 00:04:48.834 "tpoint_mask": "0x0" 00:04:48.834 }, 00:04:48.834 "nvme_tcp": { 00:04:48.834 "mask": "0x2000", 00:04:48.834 "tpoint_mask": "0x0" 00:04:48.834 }, 00:04:48.834 "bdev_nvme": { 00:04:48.834 "mask": "0x4000", 00:04:48.834 "tpoint_mask": "0x0" 00:04:48.834 }, 00:04:48.834 "sock": { 00:04:48.834 "mask": "0x8000", 00:04:48.834 "tpoint_mask": "0x0" 00:04:48.834 }, 00:04:48.834 "blob": { 00:04:48.834 "mask": "0x10000", 00:04:48.834 "tpoint_mask": "0x0" 00:04:48.834 }, 00:04:48.834 "bdev_raid": { 00:04:48.834 "mask": "0x20000", 00:04:48.834 "tpoint_mask": "0x0" 00:04:48.834 }, 00:04:48.834 "scheduler": { 00:04:48.834 "mask": "0x40000", 00:04:48.834 "tpoint_mask": "0x0" 00:04:48.834 } 00:04:48.834 }' 00:04:48.834 02:15:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:48.834 02:15:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:48.834 02:15:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:48.834 02:15:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:48.834 02:15:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:48.834 02:15:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:48.835 02:15:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:49.093 02:15:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:49.093 02:15:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:49.093 02:15:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:49.093 00:04:49.093 real 0m0.159s 00:04:49.093 user 0m0.128s 00:04:49.093 sys 0m0.023s 00:04:49.093 02:15:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:04:49.093 02:15:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:49.093 ************************************ 00:04:49.093 END TEST rpc_trace_cmd_test 00:04:49.093 ************************************ 00:04:49.093 02:15:36 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:49.093 02:15:36 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:49.093 02:15:36 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:49.093 02:15:36 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:49.094 02:15:36 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:49.094 02:15:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.094 ************************************ 00:04:49.094 START TEST rpc_daemon_integrity 00:04:49.094 ************************************ 00:04:49.094 02:15:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:49.094 02:15:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:49.094 02:15:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.094 02:15:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.094 02:15:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.094 02:15:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:49.094 02:15:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:49.094 02:15:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:49.094 02:15:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:49.094 02:15:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.094 02:15:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.094 02:15:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.094 02:15:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:49.094 02:15:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:49.094 02:15:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.094 02:15:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.094 02:15:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.094 02:15:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:49.094 { 00:04:49.094 "name": "Malloc2", 00:04:49.094 "aliases": [ 00:04:49.094 "3852dd80-f630-4a43-9313-77ff301a244f" 00:04:49.094 ], 00:04:49.094 "product_name": "Malloc disk", 00:04:49.094 "block_size": 512, 00:04:49.094 "num_blocks": 16384, 00:04:49.094 "uuid": "3852dd80-f630-4a43-9313-77ff301a244f", 00:04:49.094 "assigned_rate_limits": { 00:04:49.094 "rw_ios_per_sec": 0, 00:04:49.094 "rw_mbytes_per_sec": 0, 00:04:49.094 "r_mbytes_per_sec": 0, 00:04:49.094 "w_mbytes_per_sec": 0 00:04:49.094 }, 00:04:49.094 "claimed": false, 00:04:49.094 "zoned": false, 00:04:49.094 "supported_io_types": { 00:04:49.094 "read": true, 00:04:49.094 "write": true, 00:04:49.094 "unmap": true, 00:04:49.094 "flush": true, 00:04:49.094 "reset": true, 00:04:49.094 "nvme_admin": false, 00:04:49.094 "nvme_io": false, 00:04:49.094 "nvme_io_md": false, 00:04:49.094 "write_zeroes": true, 00:04:49.094 "zcopy": true, 00:04:49.094 "get_zone_info": false, 00:04:49.094 "zone_management": false, 00:04:49.094 "zone_append": false, 00:04:49.094 "compare": false, 00:04:49.094 
"compare_and_write": false, 00:04:49.094 "abort": true, 00:04:49.094 "seek_hole": false, 00:04:49.094 "seek_data": false, 00:04:49.094 "copy": true, 00:04:49.094 "nvme_iov_md": false 00:04:49.094 }, 00:04:49.094 "memory_domains": [ 00:04:49.094 { 00:04:49.094 "dma_device_id": "system", 00:04:49.094 "dma_device_type": 1 00:04:49.094 }, 00:04:49.094 { 00:04:49.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.094 "dma_device_type": 2 00:04:49.094 } 00:04:49.094 ], 00:04:49.094 "driver_specific": {} 00:04:49.094 } 00:04:49.094 ]' 00:04:49.094 02:15:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:49.094 02:15:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:49.094 02:15:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:49.094 02:15:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.094 02:15:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.094 [2024-11-04 02:15:36.147284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:49.094 [2024-11-04 02:15:36.147330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:49.094 [2024-11-04 02:15:36.147349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:49.094 [2024-11-04 02:15:36.147359] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:49.094 [2024-11-04 02:15:36.149426] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:49.094 [2024-11-04 02:15:36.149464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:49.094 Passthru0 00:04:49.094 02:15:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.094 02:15:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:49.094 02:15:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.094 02:15:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.094 02:15:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.094 02:15:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:49.094 { 00:04:49.094 "name": "Malloc2", 00:04:49.094 "aliases": [ 00:04:49.094 "3852dd80-f630-4a43-9313-77ff301a244f" 00:04:49.094 ], 00:04:49.094 "product_name": "Malloc disk", 00:04:49.094 "block_size": 512, 00:04:49.094 "num_blocks": 16384, 00:04:49.094 "uuid": "3852dd80-f630-4a43-9313-77ff301a244f", 00:04:49.094 "assigned_rate_limits": { 00:04:49.094 "rw_ios_per_sec": 0, 00:04:49.094 "rw_mbytes_per_sec": 0, 00:04:49.094 "r_mbytes_per_sec": 0, 00:04:49.094 "w_mbytes_per_sec": 0 00:04:49.094 }, 00:04:49.094 "claimed": true, 00:04:49.094 "claim_type": "exclusive_write", 00:04:49.094 "zoned": false, 00:04:49.094 "supported_io_types": { 00:04:49.094 "read": true, 00:04:49.094 "write": true, 00:04:49.094 "unmap": true, 00:04:49.094 "flush": true, 00:04:49.094 "reset": true, 00:04:49.094 "nvme_admin": false, 00:04:49.094 "nvme_io": false, 00:04:49.094 "nvme_io_md": false, 00:04:49.094 "write_zeroes": true, 00:04:49.094 "zcopy": true, 00:04:49.094 "get_zone_info": false, 00:04:49.094 "zone_management": false, 00:04:49.094 "zone_append": false, 00:04:49.094 "compare": false, 00:04:49.094 "compare_and_write": false, 00:04:49.094 "abort": true, 00:04:49.094 "seek_hole": false, 00:04:49.094 "seek_data": false, 
00:04:49.094 "copy": true, 00:04:49.094 "nvme_iov_md": false 00:04:49.094 }, 00:04:49.094 "memory_domains": [ 00:04:49.094 { 00:04:49.094 "dma_device_id": "system", 00:04:49.094 "dma_device_type": 1 00:04:49.094 }, 00:04:49.094 { 00:04:49.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.094 "dma_device_type": 2 00:04:49.094 } 00:04:49.094 ], 00:04:49.094 "driver_specific": {} 00:04:49.094 }, 00:04:49.094 { 00:04:49.094 "name": "Passthru0", 00:04:49.094 "aliases": [ 00:04:49.094 "aced2d09-a2cf-58ab-9a9d-3125b37670dd" 00:04:49.094 ], 00:04:49.094 "product_name": "passthru", 00:04:49.094 "block_size": 512, 00:04:49.094 "num_blocks": 16384, 00:04:49.094 "uuid": "aced2d09-a2cf-58ab-9a9d-3125b37670dd", 00:04:49.094 "assigned_rate_limits": { 00:04:49.094 "rw_ios_per_sec": 0, 00:04:49.094 "rw_mbytes_per_sec": 0, 00:04:49.094 "r_mbytes_per_sec": 0, 00:04:49.094 "w_mbytes_per_sec": 0 00:04:49.094 }, 00:04:49.094 "claimed": false, 00:04:49.094 "zoned": false, 00:04:49.094 "supported_io_types": { 00:04:49.094 "read": true, 00:04:49.094 "write": true, 00:04:49.094 "unmap": true, 00:04:49.094 "flush": true, 00:04:49.094 "reset": true, 00:04:49.094 "nvme_admin": false, 00:04:49.094 "nvme_io": false, 00:04:49.094 "nvme_io_md": false, 00:04:49.094 "write_zeroes": true, 00:04:49.094 "zcopy": true, 00:04:49.094 "get_zone_info": false, 00:04:49.094 "zone_management": false, 00:04:49.094 "zone_append": false, 00:04:49.094 "compare": false, 00:04:49.094 "compare_and_write": false, 00:04:49.094 "abort": true, 00:04:49.094 "seek_hole": false, 00:04:49.094 "seek_data": false, 00:04:49.094 "copy": true, 00:04:49.094 "nvme_iov_md": false 00:04:49.094 }, 00:04:49.094 "memory_domains": [ 00:04:49.094 { 00:04:49.094 "dma_device_id": "system", 00:04:49.094 "dma_device_type": 1 00:04:49.094 }, 00:04:49.094 { 00:04:49.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.094 "dma_device_type": 2 00:04:49.094 } 00:04:49.094 ], 00:04:49.094 "driver_specific": { 00:04:49.094 "passthru": { 00:04:49.094 "name": "Passthru0", 00:04:49.094 "base_bdev_name": "Malloc2" 00:04:49.094 } 00:04:49.094 } 00:04:49.094 } 00:04:49.094 ]' 00:04:49.095 02:15:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:49.095 02:15:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:49.095 02:15:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:49.095 02:15:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.095 02:15:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.353 02:15:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.353 02:15:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:49.353 02:15:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.353 02:15:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.353 02:15:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.353 02:15:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:49.353 02:15:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.353 02:15:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.353 02:15:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.353 02:15:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
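The bdev JSON dumps above are bdev_get_bdevs output captured at each stage: two bdevs while Passthru0 claims Malloc2 ("claimed": true, "claim_type": "exclusive_write"), then an empty list once both are deleted. A sketch of the same sequence as raw JSON-RPC payloads, reusing a connected socket like the client sketch earlier (method and parameter names follow rpc.py; response handling omitted):

#include <stddef.h>
#include <string.h>
#include <unistd.h>

/* Replays the rpc_daemon_integrity flow: create Malloc2, layer
 * Passthru0 on top, inspect, then tear both down again. */
static void replay_daemon_integrity(int fd)
{
	static const char *reqs[] = {
		"{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"bdev_malloc_create\","
		"\"params\":{\"num_blocks\":16384,\"block_size\":512,\"name\":\"Malloc2\"}}",
		"{\"jsonrpc\":\"2.0\",\"id\":2,\"method\":\"bdev_passthru_create\","
		"\"params\":{\"base_bdev_name\":\"Malloc2\",\"name\":\"Passthru0\"}}",
		"{\"jsonrpc\":\"2.0\",\"id\":3,\"method\":\"bdev_get_bdevs\"}",
		"{\"jsonrpc\":\"2.0\",\"id\":4,\"method\":\"bdev_passthru_delete\","
		"\"params\":{\"name\":\"Passthru0\"}}",
		"{\"jsonrpc\":\"2.0\",\"id\":5,\"method\":\"bdev_malloc_delete\","
		"\"params\":{\"name\":\"Malloc2\"}}",
	};
	for (size_t i = 0; i < sizeof(reqs) / sizeof(reqs[0]); i++) {
		write(fd, reqs[i], strlen(reqs[i]));
		/* a real client would read and check each response here */
	}
}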
00:04:49.353 02:15:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:49.353 02:15:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:49.353 00:04:49.353 real 0m0.240s 00:04:49.353 user 0m0.127s 00:04:49.353 sys 0m0.034s 00:04:49.353 02:15:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:49.353 ************************************ 00:04:49.353 END TEST rpc_daemon_integrity 00:04:49.353 02:15:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.353 ************************************ 00:04:49.353 02:15:36 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:49.353 02:15:36 rpc -- rpc/rpc.sh@84 -- # killprocess 57117 00:04:49.353 02:15:36 rpc -- common/autotest_common.sh@952 -- # '[' -z 57117 ']' 00:04:49.353 02:15:36 rpc -- common/autotest_common.sh@956 -- # kill -0 57117 00:04:49.353 02:15:36 rpc -- common/autotest_common.sh@957 -- # uname 00:04:49.353 02:15:36 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:49.353 02:15:36 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57117 00:04:49.353 02:15:36 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:49.353 02:15:36 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:49.353 killing process with pid 57117 00:04:49.353 02:15:36 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57117' 00:04:49.353 02:15:36 rpc -- common/autotest_common.sh@971 -- # kill 57117 00:04:49.353 02:15:36 rpc -- common/autotest_common.sh@976 -- # wait 57117 00:04:50.769 00:04:50.769 real 0m3.390s 00:04:50.769 user 0m3.829s 00:04:50.769 sys 0m0.568s 00:04:50.769 02:15:37 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:50.769 ************************************ 00:04:50.769 END TEST rpc 00:04:50.769 ************************************ 00:04:50.769 02:15:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.769 02:15:37 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:50.769 02:15:37 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:50.769 02:15:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:50.769 02:15:37 -- common/autotest_common.sh@10 -- # set +x 00:04:50.769 ************************************ 00:04:50.769 START TEST skip_rpc 00:04:50.769 ************************************ 00:04:50.769 02:15:37 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:50.769 * Looking for test storage... 
00:04:50.769 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:50.769 02:15:37 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:50.769 02:15:37 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:50.769 02:15:37 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:51.029 02:15:37 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:51.029 02:15:37 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.029 02:15:37 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.029 02:15:37 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.029 02:15:37 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.029 02:15:37 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.029 02:15:37 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.029 02:15:37 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.029 02:15:37 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.029 02:15:37 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.029 02:15:37 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.029 02:15:37 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.029 02:15:37 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:51.029 02:15:37 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:51.029 02:15:37 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.029 02:15:37 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:51.029 02:15:37 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:51.029 02:15:37 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:51.029 02:15:37 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.029 02:15:37 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:51.029 02:15:37 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.029 02:15:37 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:51.029 02:15:37 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:51.029 02:15:37 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.029 02:15:37 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:51.029 02:15:37 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.029 02:15:37 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.029 02:15:37 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.029 02:15:37 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:51.029 02:15:37 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.029 02:15:37 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:51.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.029 --rc genhtml_branch_coverage=1 00:04:51.029 --rc genhtml_function_coverage=1 00:04:51.029 --rc genhtml_legend=1 00:04:51.029 --rc geninfo_all_blocks=1 00:04:51.029 --rc geninfo_unexecuted_blocks=1 00:04:51.029 00:04:51.029 ' 00:04:51.029 02:15:37 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:51.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.029 --rc genhtml_branch_coverage=1 00:04:51.029 --rc genhtml_function_coverage=1 00:04:51.029 --rc genhtml_legend=1 00:04:51.029 --rc geninfo_all_blocks=1 00:04:51.029 --rc geninfo_unexecuted_blocks=1 00:04:51.029 00:04:51.029 ' 00:04:51.029 02:15:37 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:04:51.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.029 --rc genhtml_branch_coverage=1 00:04:51.029 --rc genhtml_function_coverage=1 00:04:51.029 --rc genhtml_legend=1 00:04:51.029 --rc geninfo_all_blocks=1 00:04:51.029 --rc geninfo_unexecuted_blocks=1 00:04:51.029 00:04:51.029 ' 00:04:51.029 02:15:37 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:51.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.029 --rc genhtml_branch_coverage=1 00:04:51.029 --rc genhtml_function_coverage=1 00:04:51.029 --rc genhtml_legend=1 00:04:51.029 --rc geninfo_all_blocks=1 00:04:51.029 --rc geninfo_unexecuted_blocks=1 00:04:51.029 00:04:51.029 ' 00:04:51.029 02:15:37 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:51.029 02:15:37 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:51.029 02:15:37 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:51.029 02:15:37 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:51.029 02:15:37 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:51.029 02:15:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.029 ************************************ 00:04:51.029 START TEST skip_rpc 00:04:51.029 ************************************ 00:04:51.029 02:15:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:04:51.029 02:15:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57330 00:04:51.029 02:15:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:51.029 02:15:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:51.029 02:15:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:51.029 [2024-11-04 02:15:38.026922] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
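[editor's note] The spdk_tgt starting here is launched with --no-rpc-server, and the assertion that follows in the log (NOT rpc_cmd spdk_get_version, es=1) boils down to: with the RPC server disabled, any RPC call must fail. A minimal standalone reproduction, assuming the binary and script paths shown in this log:

  # Core assertion of test_skip_rpc (sketch).
  bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $bin --no-rpc-server -m 0x1 &    # start the target with the RPC server disabled
  pid=$!
  sleep 5                          # the test sleeps instead of waitforlisten: no socket will appear

  if $rpc spdk_get_version; then
      echo "FAIL: RPC unexpectedly succeeded" >&2
  else
      echo "OK: spdk_get_version failed as expected"
  fi
  kill "$pid"; wait "$pid"
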
00:04:51.029 [2024-11-04 02:15:38.027041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57330 ] 00:04:51.287 [2024-11-04 02:15:38.189088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.287 [2024-11-04 02:15:38.286421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.555 02:15:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:56.555 02:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:56.556 02:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:56.556 02:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:56.556 02:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:56.556 02:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:56.556 02:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:56.556 02:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:56.556 02:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.556 02:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.556 02:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:56.556 02:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:56.556 02:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:56.556 02:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:56.556 02:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:56.556 02:15:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:56.556 02:15:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57330 00:04:56.556 02:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 57330 ']' 00:04:56.556 02:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 57330 00:04:56.556 02:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:04:56.556 02:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:56.556 02:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57330 00:04:56.556 02:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:56.556 02:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:56.556 02:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57330' 00:04:56.556 killing process with pid 57330 00:04:56.556 02:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 57330 00:04:56.556 02:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 57330 00:04:57.135 00:04:57.135 real 0m6.193s 00:04:57.135 user 0m5.826s 00:04:57.135 sys 0m0.261s 00:04:57.135 02:15:44 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:57.135 ************************************ 00:04:57.135 END TEST skip_rpc 00:04:57.135 ************************************ 00:04:57.135 02:15:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:04:57.135 02:15:44 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:57.135 02:15:44 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:57.135 02:15:44 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:57.135 02:15:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.135 ************************************ 00:04:57.135 START TEST skip_rpc_with_json 00:04:57.135 ************************************ 00:04:57.135 02:15:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:04:57.135 02:15:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:57.135 02:15:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57423 00:04:57.135 02:15:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.135 02:15:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57423 00:04:57.135 02:15:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 57423 ']' 00:04:57.135 02:15:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.135 02:15:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:57.135 02:15:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:57.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.135 02:15:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.135 02:15:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:57.135 02:15:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:57.430 [2024-11-04 02:15:44.247641] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
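[editor's note] The skip_rpc_with_json run starting here ends up exercising a save/replay round-trip: create a TCP transport, save_config to config.json (the full dump appears below), relaunch spdk_tgt from that file, and grep the log for the transport-init banner. In isolation the round-trip looks roughly like this sketch, with paths taken from the log:

  # Shape of the save/replay round-trip (sketch; assumes a running spdk_tgt
  # with an RPC server for the first two commands).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  cfg=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
  log=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt

  $rpc nvmf_create_transport -t tcp     # transport must exist before it can be saved
  $rpc save_config > "$cfg"             # dump the running configuration as JSON

  # Relaunch from the saved file and confirm the transport was re-created:
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json "$cfg" &> "$log" &
  pid=$!
  sleep 5
  kill "$pid"; wait "$pid"
  grep -q 'TCP Transport Init' "$log"
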
00:04:57.430 [2024-11-04 02:15:44.247744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57423 ] 00:04:57.430 [2024-11-04 02:15:44.393684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.430 [2024-11-04 02:15:44.470011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.997 02:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:57.997 02:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:04:57.997 02:15:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:57.997 02:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.997 02:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:57.997 [2024-11-04 02:15:45.091822] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:57.997 request: 00:04:57.997 { 00:04:57.997 "trtype": "tcp", 00:04:57.997 "method": "nvmf_get_transports", 00:04:57.997 "req_id": 1 00:04:57.997 } 00:04:57.997 Got JSON-RPC error response 00:04:57.997 response: 00:04:57.997 { 00:04:57.997 "code": -19, 00:04:57.997 "message": "No such device" 00:04:57.997 } 00:04:57.997 02:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:57.997 02:15:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:57.997 02:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.997 02:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:57.997 [2024-11-04 02:15:45.099917] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:57.997 02:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.997 02:15:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:57.997 02:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.997 02:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.256 02:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.256 02:15:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:58.256 { 00:04:58.256 "subsystems": [ 00:04:58.256 { 00:04:58.256 "subsystem": "fsdev", 00:04:58.256 "config": [ 00:04:58.256 { 00:04:58.256 "method": "fsdev_set_opts", 00:04:58.256 "params": { 00:04:58.256 "fsdev_io_pool_size": 65535, 00:04:58.256 "fsdev_io_cache_size": 256 00:04:58.256 } 00:04:58.256 } 00:04:58.256 ] 00:04:58.256 }, 00:04:58.256 { 00:04:58.256 "subsystem": "keyring", 00:04:58.256 "config": [] 00:04:58.256 }, 00:04:58.256 { 00:04:58.256 "subsystem": "iobuf", 00:04:58.256 "config": [ 00:04:58.256 { 00:04:58.256 "method": "iobuf_set_options", 00:04:58.256 "params": { 00:04:58.256 "small_pool_count": 8192, 00:04:58.256 "large_pool_count": 1024, 00:04:58.256 "small_bufsize": 8192, 00:04:58.256 "large_bufsize": 135168, 00:04:58.256 "enable_numa": false 00:04:58.256 } 00:04:58.256 } 00:04:58.256 ] 00:04:58.256 }, 00:04:58.256 { 00:04:58.256 "subsystem": "sock", 00:04:58.256 "config": [ 00:04:58.256 { 
00:04:58.256 "method": "sock_set_default_impl", 00:04:58.256 "params": { 00:04:58.256 "impl_name": "posix" 00:04:58.256 } 00:04:58.256 }, 00:04:58.256 { 00:04:58.256 "method": "sock_impl_set_options", 00:04:58.256 "params": { 00:04:58.256 "impl_name": "ssl", 00:04:58.256 "recv_buf_size": 4096, 00:04:58.256 "send_buf_size": 4096, 00:04:58.256 "enable_recv_pipe": true, 00:04:58.256 "enable_quickack": false, 00:04:58.256 "enable_placement_id": 0, 00:04:58.256 "enable_zerocopy_send_server": true, 00:04:58.256 "enable_zerocopy_send_client": false, 00:04:58.256 "zerocopy_threshold": 0, 00:04:58.256 "tls_version": 0, 00:04:58.256 "enable_ktls": false 00:04:58.256 } 00:04:58.256 }, 00:04:58.256 { 00:04:58.256 "method": "sock_impl_set_options", 00:04:58.256 "params": { 00:04:58.256 "impl_name": "posix", 00:04:58.256 "recv_buf_size": 2097152, 00:04:58.256 "send_buf_size": 2097152, 00:04:58.256 "enable_recv_pipe": true, 00:04:58.256 "enable_quickack": false, 00:04:58.256 "enable_placement_id": 0, 00:04:58.256 "enable_zerocopy_send_server": true, 00:04:58.256 "enable_zerocopy_send_client": false, 00:04:58.256 "zerocopy_threshold": 0, 00:04:58.256 "tls_version": 0, 00:04:58.256 "enable_ktls": false 00:04:58.256 } 00:04:58.256 } 00:04:58.256 ] 00:04:58.256 }, 00:04:58.256 { 00:04:58.256 "subsystem": "vmd", 00:04:58.256 "config": [] 00:04:58.256 }, 00:04:58.256 { 00:04:58.256 "subsystem": "accel", 00:04:58.256 "config": [ 00:04:58.256 { 00:04:58.256 "method": "accel_set_options", 00:04:58.256 "params": { 00:04:58.256 "small_cache_size": 128, 00:04:58.256 "large_cache_size": 16, 00:04:58.256 "task_count": 2048, 00:04:58.256 "sequence_count": 2048, 00:04:58.256 "buf_count": 2048 00:04:58.256 } 00:04:58.256 } 00:04:58.256 ] 00:04:58.256 }, 00:04:58.256 { 00:04:58.256 "subsystem": "bdev", 00:04:58.256 "config": [ 00:04:58.256 { 00:04:58.256 "method": "bdev_set_options", 00:04:58.256 "params": { 00:04:58.256 "bdev_io_pool_size": 65535, 00:04:58.256 "bdev_io_cache_size": 256, 00:04:58.256 "bdev_auto_examine": true, 00:04:58.256 "iobuf_small_cache_size": 128, 00:04:58.256 "iobuf_large_cache_size": 16 00:04:58.256 } 00:04:58.256 }, 00:04:58.256 { 00:04:58.256 "method": "bdev_raid_set_options", 00:04:58.256 "params": { 00:04:58.256 "process_window_size_kb": 1024, 00:04:58.256 "process_max_bandwidth_mb_sec": 0 00:04:58.256 } 00:04:58.256 }, 00:04:58.256 { 00:04:58.256 "method": "bdev_iscsi_set_options", 00:04:58.256 "params": { 00:04:58.256 "timeout_sec": 30 00:04:58.256 } 00:04:58.256 }, 00:04:58.256 { 00:04:58.256 "method": "bdev_nvme_set_options", 00:04:58.256 "params": { 00:04:58.256 "action_on_timeout": "none", 00:04:58.256 "timeout_us": 0, 00:04:58.256 "timeout_admin_us": 0, 00:04:58.256 "keep_alive_timeout_ms": 10000, 00:04:58.256 "arbitration_burst": 0, 00:04:58.256 "low_priority_weight": 0, 00:04:58.256 "medium_priority_weight": 0, 00:04:58.256 "high_priority_weight": 0, 00:04:58.256 "nvme_adminq_poll_period_us": 10000, 00:04:58.256 "nvme_ioq_poll_period_us": 0, 00:04:58.256 "io_queue_requests": 0, 00:04:58.256 "delay_cmd_submit": true, 00:04:58.256 "transport_retry_count": 4, 00:04:58.256 "bdev_retry_count": 3, 00:04:58.256 "transport_ack_timeout": 0, 00:04:58.256 "ctrlr_loss_timeout_sec": 0, 00:04:58.256 "reconnect_delay_sec": 0, 00:04:58.256 "fast_io_fail_timeout_sec": 0, 00:04:58.256 "disable_auto_failback": false, 00:04:58.256 "generate_uuids": false, 00:04:58.256 "transport_tos": 0, 00:04:58.256 "nvme_error_stat": false, 00:04:58.256 "rdma_srq_size": 0, 00:04:58.256 "io_path_stat": false, 
00:04:58.256 "allow_accel_sequence": false, 00:04:58.256 "rdma_max_cq_size": 0, 00:04:58.256 "rdma_cm_event_timeout_ms": 0, 00:04:58.256 "dhchap_digests": [ 00:04:58.256 "sha256", 00:04:58.256 "sha384", 00:04:58.256 "sha512" 00:04:58.256 ], 00:04:58.256 "dhchap_dhgroups": [ 00:04:58.256 "null", 00:04:58.256 "ffdhe2048", 00:04:58.256 "ffdhe3072", 00:04:58.256 "ffdhe4096", 00:04:58.256 "ffdhe6144", 00:04:58.256 "ffdhe8192" 00:04:58.256 ] 00:04:58.256 } 00:04:58.256 }, 00:04:58.256 { 00:04:58.256 "method": "bdev_nvme_set_hotplug", 00:04:58.256 "params": { 00:04:58.256 "period_us": 100000, 00:04:58.256 "enable": false 00:04:58.256 } 00:04:58.256 }, 00:04:58.256 { 00:04:58.256 "method": "bdev_wait_for_examine" 00:04:58.256 } 00:04:58.256 ] 00:04:58.256 }, 00:04:58.256 { 00:04:58.256 "subsystem": "scsi", 00:04:58.256 "config": null 00:04:58.256 }, 00:04:58.256 { 00:04:58.256 "subsystem": "scheduler", 00:04:58.256 "config": [ 00:04:58.256 { 00:04:58.256 "method": "framework_set_scheduler", 00:04:58.256 "params": { 00:04:58.256 "name": "static" 00:04:58.256 } 00:04:58.256 } 00:04:58.256 ] 00:04:58.256 }, 00:04:58.256 { 00:04:58.256 "subsystem": "vhost_scsi", 00:04:58.256 "config": [] 00:04:58.256 }, 00:04:58.256 { 00:04:58.256 "subsystem": "vhost_blk", 00:04:58.256 "config": [] 00:04:58.256 }, 00:04:58.256 { 00:04:58.256 "subsystem": "ublk", 00:04:58.256 "config": [] 00:04:58.256 }, 00:04:58.256 { 00:04:58.256 "subsystem": "nbd", 00:04:58.256 "config": [] 00:04:58.256 }, 00:04:58.256 { 00:04:58.256 "subsystem": "nvmf", 00:04:58.256 "config": [ 00:04:58.256 { 00:04:58.256 "method": "nvmf_set_config", 00:04:58.256 "params": { 00:04:58.256 "discovery_filter": "match_any", 00:04:58.256 "admin_cmd_passthru": { 00:04:58.256 "identify_ctrlr": false 00:04:58.256 }, 00:04:58.256 "dhchap_digests": [ 00:04:58.256 "sha256", 00:04:58.256 "sha384", 00:04:58.256 "sha512" 00:04:58.256 ], 00:04:58.256 "dhchap_dhgroups": [ 00:04:58.256 "null", 00:04:58.256 "ffdhe2048", 00:04:58.256 "ffdhe3072", 00:04:58.256 "ffdhe4096", 00:04:58.256 "ffdhe6144", 00:04:58.256 "ffdhe8192" 00:04:58.256 ] 00:04:58.256 } 00:04:58.256 }, 00:04:58.256 { 00:04:58.256 "method": "nvmf_set_max_subsystems", 00:04:58.256 "params": { 00:04:58.257 "max_subsystems": 1024 00:04:58.257 } 00:04:58.257 }, 00:04:58.257 { 00:04:58.257 "method": "nvmf_set_crdt", 00:04:58.257 "params": { 00:04:58.257 "crdt1": 0, 00:04:58.257 "crdt2": 0, 00:04:58.257 "crdt3": 0 00:04:58.257 } 00:04:58.257 }, 00:04:58.257 { 00:04:58.257 "method": "nvmf_create_transport", 00:04:58.257 "params": { 00:04:58.257 "trtype": "TCP", 00:04:58.257 "max_queue_depth": 128, 00:04:58.257 "max_io_qpairs_per_ctrlr": 127, 00:04:58.257 "in_capsule_data_size": 4096, 00:04:58.257 "max_io_size": 131072, 00:04:58.257 "io_unit_size": 131072, 00:04:58.257 "max_aq_depth": 128, 00:04:58.257 "num_shared_buffers": 511, 00:04:58.257 "buf_cache_size": 4294967295, 00:04:58.257 "dif_insert_or_strip": false, 00:04:58.257 "zcopy": false, 00:04:58.257 "c2h_success": true, 00:04:58.257 "sock_priority": 0, 00:04:58.257 "abort_timeout_sec": 1, 00:04:58.257 "ack_timeout": 0, 00:04:58.257 "data_wr_pool_size": 0 00:04:58.257 } 00:04:58.257 } 00:04:58.257 ] 00:04:58.257 }, 00:04:58.257 { 00:04:58.257 "subsystem": "iscsi", 00:04:58.257 "config": [ 00:04:58.257 { 00:04:58.257 "method": "iscsi_set_options", 00:04:58.257 "params": { 00:04:58.257 "node_base": "iqn.2016-06.io.spdk", 00:04:58.257 "max_sessions": 128, 00:04:58.257 "max_connections_per_session": 2, 00:04:58.257 "max_queue_depth": 64, 00:04:58.257 
"default_time2wait": 2, 00:04:58.257 "default_time2retain": 20, 00:04:58.257 "first_burst_length": 8192, 00:04:58.257 "immediate_data": true, 00:04:58.257 "allow_duplicated_isid": false, 00:04:58.257 "error_recovery_level": 0, 00:04:58.257 "nop_timeout": 60, 00:04:58.257 "nop_in_interval": 30, 00:04:58.257 "disable_chap": false, 00:04:58.257 "require_chap": false, 00:04:58.257 "mutual_chap": false, 00:04:58.257 "chap_group": 0, 00:04:58.257 "max_large_datain_per_connection": 64, 00:04:58.257 "max_r2t_per_connection": 4, 00:04:58.257 "pdu_pool_size": 36864, 00:04:58.257 "immediate_data_pool_size": 16384, 00:04:58.257 "data_out_pool_size": 2048 00:04:58.257 } 00:04:58.257 } 00:04:58.257 ] 00:04:58.257 } 00:04:58.257 ] 00:04:58.257 } 00:04:58.257 02:15:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:58.257 02:15:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57423 00:04:58.257 02:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57423 ']' 00:04:58.257 02:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57423 00:04:58.257 02:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:58.257 02:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:58.257 02:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57423 00:04:58.257 02:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:58.257 killing process with pid 57423 00:04:58.257 02:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:58.257 02:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57423' 00:04:58.257 02:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57423 00:04:58.257 02:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57423 00:04:59.632 02:15:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57462 00:04:59.632 02:15:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:59.632 02:15:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:04.896 02:15:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57462 00:05:04.896 02:15:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57462 ']' 00:05:04.896 02:15:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57462 00:05:04.896 02:15:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:04.896 02:15:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:04.896 02:15:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57462 00:05:04.896 02:15:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:04.896 killing process with pid 57462 00:05:04.896 02:15:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:04.896 02:15:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57462' 00:05:04.896 02:15:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- 
# kill 57462 00:05:04.896 02:15:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57462 00:05:05.830 02:15:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:05.830 02:15:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:05.830 00:05:05.830 real 0m8.433s 00:05:05.830 user 0m8.122s 00:05:05.830 sys 0m0.541s 00:05:05.830 02:15:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:05.830 ************************************ 00:05:05.830 02:15:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:05.830 END TEST skip_rpc_with_json 00:05:05.830 ************************************ 00:05:05.830 02:15:52 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:05.830 02:15:52 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:05.830 02:15:52 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:05.830 02:15:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.830 ************************************ 00:05:05.830 START TEST skip_rpc_with_delay 00:05:05.830 ************************************ 00:05:05.830 02:15:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:05:05.830 02:15:52 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:05.830 02:15:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:05.830 02:15:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:05.830 02:15:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:05.830 02:15:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:05.830 02:15:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:05.830 02:15:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:05.830 02:15:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:05.830 02:15:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:05.830 02:15:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:05.830 02:15:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:05.830 02:15:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:05.830 [2024-11-04 02:15:52.722562] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
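[editor's note] The skip_rpc_with_delay ERROR line just above is the expected outcome, not a failure: the test asserts that spdk_tgt rejects --wait-for-rpc when no RPC server will be started. A minimal sketch of that check, using the binary path from the log:

  # The delay test asserts this flag combination is rejected (sketch).
  bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  if $bin --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "FAIL: spdk_tgt should refuse --wait-for-rpc without an RPC server" >&2
      exit 1
  fi
  # Expected on stderr, as captured above:
  #   app.c: ... *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
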
00:05:05.830 02:15:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:05.830 02:15:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:05.830 02:15:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:05.830 02:15:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:05.830 00:05:05.830 real 0m0.127s 00:05:05.830 user 0m0.067s 00:05:05.830 sys 0m0.059s 00:05:05.830 02:15:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:05.830 02:15:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:05.830 ************************************ 00:05:05.830 END TEST skip_rpc_with_delay 00:05:05.830 ************************************ 00:05:05.830 02:15:52 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:05.830 02:15:52 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:05.830 02:15:52 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:05.830 02:15:52 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:05.830 02:15:52 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:05.830 02:15:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.830 ************************************ 00:05:05.830 START TEST exit_on_failed_rpc_init 00:05:05.830 ************************************ 00:05:05.830 02:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:05:05.830 02:15:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57579 00:05:05.830 02:15:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57579 00:05:05.830 02:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 57579 ']' 00:05:05.830 02:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.830 02:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:05.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.831 02:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.831 02:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:05.831 02:15:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:05.831 02:15:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:05.831 [2024-11-04 02:15:52.887277] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:05:05.831 [2024-11-04 02:15:52.887405] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57579 ] 00:05:06.089 [2024-11-04 02:15:53.045496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.089 [2024-11-04 02:15:53.125881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.656 02:15:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:06.656 02:15:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:05:06.656 02:15:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.656 02:15:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:06.656 02:15:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:06.656 02:15:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:06.656 02:15:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:06.656 02:15:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:06.656 02:15:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:06.656 02:15:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:06.656 02:15:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:06.656 02:15:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:06.656 02:15:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:06.656 02:15:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:06.656 02:15:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:06.914 [2024-11-04 02:15:53.798860] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:05:06.914 [2024-11-04 02:15:53.798989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57597 ] 00:05:06.914 [2024-11-04 02:15:53.957675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.173 [2024-11-04 02:15:54.053280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.173 [2024-11-04 02:15:54.053358] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
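[editor's note] exit_on_failed_rpc_init, which starts here, hinges on the "RPC Unix domain socket path /var/tmp/spdk.sock in use" error shown a little further down: a second spdk_tgt on a different core mask must fail to initialize because the first one owns the default socket. Stripped of the harness, the check is roughly:

  # Core of exit_on_failed_rpc_init (sketch; commands and error text as captured in this log).
  bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  $bin -m 0x1 &                     # first instance owns /var/tmp/spdk.sock
  pid=$!
  sleep 5                           # stand-in for the test's waitforlisten

  $bin -m 0x2                       # second instance: same default socket, must fail
  es=$?                             # the log records es=234 -> 106 -> 1 inside the NOT wrapper
  [ "$es" -ne 0 ] && echo "OK: second target refused to start (exit $es)"

  kill "$pid"; wait "$pid"
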
00:05:07.173 [2024-11-04 02:15:54.053371] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:07.173 [2024-11-04 02:15:54.053383] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:07.173 02:15:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:07.173 02:15:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:07.173 02:15:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:07.173 02:15:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:07.173 02:15:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:07.173 02:15:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:07.173 02:15:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:07.173 02:15:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57579 00:05:07.173 02:15:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 57579 ']' 00:05:07.173 02:15:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 57579 00:05:07.173 02:15:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:05:07.173 02:15:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:07.173 02:15:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57579 00:05:07.173 02:15:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:07.173 02:15:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:07.173 killing process with pid 57579 00:05:07.173 02:15:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57579' 00:05:07.173 02:15:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 57579 00:05:07.173 02:15:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 57579 00:05:08.548 00:05:08.548 real 0m2.561s 00:05:08.548 user 0m2.874s 00:05:08.548 sys 0m0.384s 00:05:08.548 02:15:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:08.548 02:15:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:08.548 ************************************ 00:05:08.548 END TEST exit_on_failed_rpc_init 00:05:08.548 ************************************ 00:05:08.548 02:15:55 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:08.548 00:05:08.548 real 0m17.606s 00:05:08.548 user 0m17.014s 00:05:08.548 sys 0m1.404s 00:05:08.548 02:15:55 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:08.548 02:15:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.548 ************************************ 00:05:08.548 END TEST skip_rpc 00:05:08.548 ************************************ 00:05:08.548 02:15:55 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:08.548 02:15:55 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:08.548 02:15:55 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:08.548 02:15:55 -- common/autotest_common.sh@10 -- # set +x 00:05:08.548 
************************************ 00:05:08.548 START TEST rpc_client 00:05:08.548 ************************************ 00:05:08.548 02:15:55 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:08.548 * Looking for test storage... 00:05:08.548 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:08.548 02:15:55 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:08.548 02:15:55 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:08.548 02:15:55 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:05:08.548 02:15:55 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:08.548 02:15:55 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.548 02:15:55 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.548 02:15:55 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.548 02:15:55 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.548 02:15:55 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.548 02:15:55 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.548 02:15:55 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.548 02:15:55 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.548 02:15:55 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.548 02:15:55 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.548 02:15:55 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.548 02:15:55 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:08.548 02:15:55 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:08.548 02:15:55 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.548 02:15:55 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.548 02:15:55 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:08.548 02:15:55 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:08.548 02:15:55 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.548 02:15:55 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:08.548 02:15:55 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.548 02:15:55 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:08.548 02:15:55 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:08.548 02:15:55 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.548 02:15:55 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:08.548 02:15:55 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.548 02:15:55 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.548 02:15:55 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.548 02:15:55 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:08.548 02:15:55 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.548 02:15:55 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:08.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.548 --rc genhtml_branch_coverage=1 00:05:08.548 --rc genhtml_function_coverage=1 00:05:08.548 --rc genhtml_legend=1 00:05:08.548 --rc geninfo_all_blocks=1 00:05:08.548 --rc geninfo_unexecuted_blocks=1 00:05:08.548 00:05:08.548 ' 00:05:08.548 02:15:55 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:08.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.548 --rc genhtml_branch_coverage=1 00:05:08.548 --rc genhtml_function_coverage=1 00:05:08.548 --rc genhtml_legend=1 00:05:08.548 --rc geninfo_all_blocks=1 00:05:08.548 --rc geninfo_unexecuted_blocks=1 00:05:08.548 00:05:08.548 ' 00:05:08.548 02:15:55 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:08.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.549 --rc genhtml_branch_coverage=1 00:05:08.549 --rc genhtml_function_coverage=1 00:05:08.549 --rc genhtml_legend=1 00:05:08.549 --rc geninfo_all_blocks=1 00:05:08.549 --rc geninfo_unexecuted_blocks=1 00:05:08.549 00:05:08.549 ' 00:05:08.549 02:15:55 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:08.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.549 --rc genhtml_branch_coverage=1 00:05:08.549 --rc genhtml_function_coverage=1 00:05:08.549 --rc genhtml_legend=1 00:05:08.549 --rc geninfo_all_blocks=1 00:05:08.549 --rc geninfo_unexecuted_blocks=1 00:05:08.549 00:05:08.549 ' 00:05:08.549 02:15:55 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:08.549 OK 00:05:08.549 02:15:55 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:08.549 00:05:08.549 real 0m0.171s 00:05:08.549 user 0m0.099s 00:05:08.549 sys 0m0.080s 00:05:08.549 02:15:55 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:08.549 02:15:55 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:08.549 ************************************ 00:05:08.549 END TEST rpc_client 00:05:08.549 ************************************ 00:05:08.549 02:15:55 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:08.549 02:15:55 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:08.549 02:15:55 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:08.549 02:15:55 -- common/autotest_common.sh@10 -- # set +x 00:05:08.549 ************************************ 00:05:08.549 START TEST json_config 00:05:08.549 ************************************ 00:05:08.549 02:15:55 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:08.807 02:15:55 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:08.807 02:15:55 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:05:08.807 02:15:55 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:08.807 02:15:55 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:08.807 02:15:55 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.807 02:15:55 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.807 02:15:55 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.807 02:15:55 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.807 02:15:55 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.807 02:15:55 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.807 02:15:55 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.807 02:15:55 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.807 02:15:55 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.807 02:15:55 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.807 02:15:55 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.807 02:15:55 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:08.807 02:15:55 json_config -- scripts/common.sh@345 -- # : 1 00:05:08.807 02:15:55 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.807 02:15:55 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.807 02:15:55 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:08.807 02:15:55 json_config -- scripts/common.sh@353 -- # local d=1 00:05:08.807 02:15:55 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.807 02:15:55 json_config -- scripts/common.sh@355 -- # echo 1 00:05:08.807 02:15:55 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.807 02:15:55 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:08.807 02:15:55 json_config -- scripts/common.sh@353 -- # local d=2 00:05:08.807 02:15:55 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.807 02:15:55 json_config -- scripts/common.sh@355 -- # echo 2 00:05:08.807 02:15:55 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.807 02:15:55 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.807 02:15:55 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.807 02:15:55 json_config -- scripts/common.sh@368 -- # return 0 00:05:08.808 02:15:55 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.808 02:15:55 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:08.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.808 --rc genhtml_branch_coverage=1 00:05:08.808 --rc genhtml_function_coverage=1 00:05:08.808 --rc genhtml_legend=1 00:05:08.808 --rc geninfo_all_blocks=1 00:05:08.808 --rc geninfo_unexecuted_blocks=1 00:05:08.808 00:05:08.808 ' 00:05:08.808 02:15:55 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:08.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.808 --rc genhtml_branch_coverage=1 00:05:08.808 --rc genhtml_function_coverage=1 00:05:08.808 --rc genhtml_legend=1 00:05:08.808 --rc geninfo_all_blocks=1 00:05:08.808 --rc geninfo_unexecuted_blocks=1 00:05:08.808 00:05:08.808 ' 00:05:08.808 02:15:55 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:08.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.808 --rc genhtml_branch_coverage=1 00:05:08.808 --rc genhtml_function_coverage=1 00:05:08.808 --rc genhtml_legend=1 00:05:08.808 --rc geninfo_all_blocks=1 00:05:08.808 --rc geninfo_unexecuted_blocks=1 00:05:08.808 00:05:08.808 ' 00:05:08.808 02:15:55 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:08.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.808 --rc genhtml_branch_coverage=1 00:05:08.808 --rc genhtml_function_coverage=1 00:05:08.808 --rc genhtml_legend=1 00:05:08.808 --rc geninfo_all_blocks=1 00:05:08.808 --rc geninfo_unexecuted_blocks=1 00:05:08.808 00:05:08.808 ' 00:05:08.808 02:15:55 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:08.808 02:15:55 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:08.808 02:15:55 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:08.808 02:15:55 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:08.808 02:15:55 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:08.808 02:15:55 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:08.808 02:15:55 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:08.808 02:15:55 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:08.808 02:15:55 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:08.808 02:15:55 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:08.808 02:15:55 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:08.808 02:15:55 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:08.808 02:15:55 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:518f79be-9b30-4da0-8ec1-5dca1ca1a8ef 00:05:08.808 02:15:55 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=518f79be-9b30-4da0-8ec1-5dca1ca1a8ef 00:05:08.808 02:15:55 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:08.808 02:15:55 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:08.808 02:15:55 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:08.808 02:15:55 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:08.808 02:15:55 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:08.808 02:15:55 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:08.808 02:15:55 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:08.808 02:15:55 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:08.808 02:15:55 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:08.808 02:15:55 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.808 02:15:55 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.808 02:15:55 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.808 02:15:55 json_config -- paths/export.sh@5 -- # export PATH 00:05:08.808 02:15:55 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.808 02:15:55 json_config -- nvmf/common.sh@51 -- # : 0 00:05:08.808 02:15:55 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:08.808 02:15:55 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:08.808 02:15:55 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:08.808 02:15:55 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:08.808 02:15:55 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:08.808 02:15:55 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:08.808 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:08.808 02:15:55 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:08.808 02:15:55 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:08.808 02:15:55 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:08.808 02:15:55 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:08.808 02:15:55 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:08.808 02:15:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:08.808 02:15:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:08.808 02:15:55 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:08.808 WARNING: No tests are enabled so not running JSON configuration tests 00:05:08.808 02:15:55 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:08.808 02:15:55 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:08.808 00:05:08.808 real 0m0.148s 00:05:08.808 user 0m0.086s 00:05:08.808 sys 0m0.057s 00:05:08.808 02:15:55 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:08.808 02:15:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.808 ************************************ 00:05:08.808 END TEST json_config 00:05:08.808 ************************************ 00:05:08.808 02:15:55 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:08.808 02:15:55 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:08.808 02:15:55 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:08.808 02:15:55 -- common/autotest_common.sh@10 -- # set +x 00:05:08.808 ************************************ 00:05:08.808 START TEST json_config_extra_key 00:05:08.808 ************************************ 00:05:08.808 02:15:55 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:08.808 02:15:55 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:08.808 02:15:55 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:05:08.808 02:15:55 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:09.081 02:15:55 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:09.081 02:15:55 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.081 02:15:55 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.081 02:15:55 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.081 02:15:55 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.081 02:15:55 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.081 02:15:55 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.081 02:15:55 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.081 02:15:55 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.081 02:15:55 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.081 02:15:55 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.081 02:15:55 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.081 02:15:55 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:09.081 02:15:55 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:09.081 02:15:55 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.081 02:15:55 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:09.081 02:15:55 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:09.081 02:15:55 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:09.081 02:15:55 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.081 02:15:55 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:09.081 02:15:55 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.081 02:15:55 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:09.081 02:15:55 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:09.081 02:15:55 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.081 02:15:55 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:09.081 02:15:55 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.081 02:15:55 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.081 02:15:55 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.081 02:15:55 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:09.081 02:15:55 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.081 02:15:55 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:09.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.081 --rc genhtml_branch_coverage=1 00:05:09.081 --rc genhtml_function_coverage=1 00:05:09.081 --rc genhtml_legend=1 00:05:09.081 --rc geninfo_all_blocks=1 00:05:09.081 --rc geninfo_unexecuted_blocks=1 00:05:09.081 00:05:09.081 ' 00:05:09.081 02:15:55 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:09.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.081 --rc genhtml_branch_coverage=1 00:05:09.081 --rc genhtml_function_coverage=1 00:05:09.081 --rc genhtml_legend=1 00:05:09.081 --rc geninfo_all_blocks=1 00:05:09.081 --rc geninfo_unexecuted_blocks=1 00:05:09.081 00:05:09.081 ' 00:05:09.081 02:15:55 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:09.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.081 --rc genhtml_branch_coverage=1 00:05:09.081 --rc genhtml_function_coverage=1 00:05:09.081 --rc genhtml_legend=1 00:05:09.081 --rc geninfo_all_blocks=1 00:05:09.081 --rc geninfo_unexecuted_blocks=1 00:05:09.081 00:05:09.081 ' 00:05:09.081 02:15:55 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:09.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.081 --rc genhtml_branch_coverage=1 00:05:09.081 --rc 
genhtml_function_coverage=1 00:05:09.081 --rc genhtml_legend=1 00:05:09.081 --rc geninfo_all_blocks=1 00:05:09.081 --rc geninfo_unexecuted_blocks=1 00:05:09.081 00:05:09.081 ' 00:05:09.081 02:15:55 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:09.081 02:15:55 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:09.081 02:15:55 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:09.081 02:15:55 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:09.081 02:15:55 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:09.081 02:15:55 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:09.081 02:15:55 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:09.081 02:15:55 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:09.081 02:15:55 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:09.081 02:15:55 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:09.081 02:15:55 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:09.081 02:15:55 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:09.081 02:15:55 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:518f79be-9b30-4da0-8ec1-5dca1ca1a8ef 00:05:09.081 02:15:55 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=518f79be-9b30-4da0-8ec1-5dca1ca1a8ef 00:05:09.081 02:15:55 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:09.081 02:15:55 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:09.081 02:15:55 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:09.081 02:15:55 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:09.081 02:15:55 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:09.081 02:15:55 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:09.081 02:15:55 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:09.081 02:15:55 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:09.081 02:15:55 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:09.081 02:15:55 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.081 02:15:55 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.081 02:15:55 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.081 02:15:55 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:09.081 02:15:55 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.081 02:15:55 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:09.081 02:15:55 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:09.081 02:15:55 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:09.081 02:15:55 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:09.082 02:15:55 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:09.082 02:15:55 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:09.082 02:15:55 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:09.082 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:09.082 02:15:55 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:09.082 02:15:55 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:09.082 02:15:55 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:09.082 02:15:55 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:09.082 02:15:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:09.082 02:15:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:09.082 02:15:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:09.082 02:15:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:09.082 02:15:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:09.082 02:15:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:09.082 02:15:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:09.082 02:15:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:09.082 02:15:55 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:09.082 INFO: launching applications... 00:05:09.082 02:15:55 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
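The "[: : integer expression expected" message emitted after nvmf/common.sh@33 (here and in the json_config run above) is bash rejecting an empty string in a numeric test: the traced command is '[' '' -eq 1 ']'. A minimal reproduction with a defensive rewrite, using "flag" as a placeholder since the real variable name is not visible in the trace:

    flag=''
    [ "$flag" -eq 1 ]        # bash: [: : integer expression expected (exit status 2)
    [ "${flag:-0}" -eq 1 ]   # substituting 0 for the empty value avoids the error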
00:05:09.082 02:15:55 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:09.082 02:15:55 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:09.082 02:15:55 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:09.082 02:15:55 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:09.082 02:15:55 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:09.082 02:15:55 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:09.082 02:15:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.082 02:15:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.082 02:15:55 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57785 00:05:09.082 Waiting for target to run... 00:05:09.082 02:15:55 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:09.082 02:15:55 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57785 /var/tmp/spdk_tgt.sock 00:05:09.082 02:15:55 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 57785 ']' 00:05:09.082 02:15:55 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:09.082 02:15:55 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:09.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:09.082 02:15:55 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:09.082 02:15:55 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:09.082 02:15:55 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:09.082 02:15:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:09.082 [2024-11-04 02:15:56.017267] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:05:09.082 [2024-11-04 02:15:56.017364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57785 ] 00:05:09.340 [2024-11-04 02:15:56.321372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.340 [2024-11-04 02:15:56.411137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.906 02:15:56 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:09.906 00:05:09.906 02:15:56 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:05:09.906 02:15:56 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:09.906 INFO: shutting down applications... 00:05:09.906 02:15:56 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
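Condensed, the start sequence traced above (json_config/common.sh@21-25) launches the target with a private RPC socket and the extra_key.json config, records its pid, and polls until the socket answers. A sketch with the paths and flags copied from the trace; max_retries=100 is taken from the traced 'local max_retries=100', while the poll interval and probe command are simplifying assumptions:

    sock=/var/tmp/spdk_tgt.sock
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r "$sock" --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    app_pid=$!
    # waitforlisten: retry until an RPC over the socket succeeds
    for ((i = 0; i < 100; i++)); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done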
00:05:09.906 02:15:56 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:09.906 02:15:56 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:09.906 02:15:56 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:09.906 02:15:56 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57785 ]] 00:05:09.906 02:15:56 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57785 00:05:09.906 02:15:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:09.906 02:15:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:09.906 02:15:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57785 00:05:09.906 02:15:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:10.472 02:15:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:10.472 02:15:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:10.472 02:15:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57785 00:05:10.472 02:15:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:11.038 02:15:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:11.038 02:15:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.038 02:15:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57785 00:05:11.038 02:15:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:11.602 02:15:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:11.602 02:15:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.602 02:15:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57785 00:05:11.602 02:15:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:11.861 02:15:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:11.861 02:15:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.861 02:15:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57785 00:05:11.861 02:15:58 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:11.861 02:15:58 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:11.861 02:15:58 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:11.861 SPDK target shutdown done 00:05:11.861 02:15:58 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:11.861 Success 00:05:11.861 02:15:58 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:11.861 00:05:11.861 real 0m3.091s 00:05:11.861 user 0m2.711s 00:05:11.861 sys 0m0.374s 00:05:11.861 02:15:58 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:11.861 02:15:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:11.861 ************************************ 00:05:11.861 END TEST json_config_extra_key 00:05:11.861 ************************************ 00:05:11.861 02:15:58 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:11.861 02:15:58 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:11.861 02:15:58 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:11.861 02:15:58 -- common/autotest_common.sh@10 -- # set +x 00:05:11.861 
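The shutdown sequence traced above with its half-second ticks (json_config/common.sh@38-45) reduces to: send SIGINT, then probe the pid with 'kill -0' every 0.5 s for up to 30 tries before giving up. Reconstructed from the trace:

    kill -SIGINT "$app_pid"                        # common.sh@38
    for ((i = 0; i < 30; i++)); do                 # common.sh@40
        kill -0 "$app_pid" 2>/dev/null || break    # kill -0 only probes; no signal is sent
        sleep 0.5                                  # common.sh@45
    done
    echo 'SPDK target shutdown done'               # common.sh@53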
************************************ 00:05:11.861 START TEST alias_rpc 00:05:11.861 ************************************ 00:05:11.861 02:15:58 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:12.119 * Looking for test storage... 00:05:12.119 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:12.119 02:15:59 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:12.119 02:15:59 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:12.119 02:15:59 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:12.119 02:15:59 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:12.119 02:15:59 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.119 02:15:59 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.119 02:15:59 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.119 02:15:59 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.119 02:15:59 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.119 02:15:59 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.119 02:15:59 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.119 02:15:59 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.119 02:15:59 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.119 02:15:59 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.119 02:15:59 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.119 02:15:59 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:12.119 02:15:59 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:12.119 02:15:59 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.119 02:15:59 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.119 02:15:59 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:12.119 02:15:59 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:12.119 02:15:59 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.120 02:15:59 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:12.120 02:15:59 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.120 02:15:59 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:12.120 02:15:59 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:12.120 02:15:59 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.120 02:15:59 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:12.120 02:15:59 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.120 02:15:59 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.120 02:15:59 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.120 02:15:59 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:12.120 02:15:59 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.120 02:15:59 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:12.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.120 --rc genhtml_branch_coverage=1 00:05:12.120 --rc genhtml_function_coverage=1 00:05:12.120 --rc genhtml_legend=1 00:05:12.120 --rc geninfo_all_blocks=1 00:05:12.120 --rc geninfo_unexecuted_blocks=1 00:05:12.120 00:05:12.120 ' 00:05:12.120 02:15:59 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:12.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.120 --rc genhtml_branch_coverage=1 00:05:12.120 --rc genhtml_function_coverage=1 00:05:12.120 --rc genhtml_legend=1 00:05:12.120 --rc geninfo_all_blocks=1 00:05:12.120 --rc geninfo_unexecuted_blocks=1 00:05:12.120 00:05:12.120 ' 00:05:12.120 02:15:59 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:12.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.120 --rc genhtml_branch_coverage=1 00:05:12.120 --rc genhtml_function_coverage=1 00:05:12.120 --rc genhtml_legend=1 00:05:12.120 --rc geninfo_all_blocks=1 00:05:12.120 --rc geninfo_unexecuted_blocks=1 00:05:12.120 00:05:12.120 ' 00:05:12.120 02:15:59 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:12.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.120 --rc genhtml_branch_coverage=1 00:05:12.120 --rc genhtml_function_coverage=1 00:05:12.120 --rc genhtml_legend=1 00:05:12.120 --rc geninfo_all_blocks=1 00:05:12.120 --rc geninfo_unexecuted_blocks=1 00:05:12.120 00:05:12.120 ' 00:05:12.120 02:15:59 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:12.120 02:15:59 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57878 00:05:12.120 02:15:59 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57878 00:05:12.120 02:15:59 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 57878 ']' 00:05:12.120 02:15:59 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:12.120 02:15:59 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:12.120 02:15:59 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:12.120 02:15:59 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.120 02:15:59 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:12.120 02:15:59 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.120 [2024-11-04 02:15:59.147341] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:05:12.120 [2024-11-04 02:15:59.147434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57878 ] 00:05:12.378 [2024-11-04 02:15:59.302184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.378 [2024-11-04 02:15:59.395477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.947 02:15:59 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:12.947 02:15:59 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:12.947 02:15:59 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:13.205 02:16:00 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57878 00:05:13.205 02:16:00 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 57878 ']' 00:05:13.205 02:16:00 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 57878 00:05:13.205 02:16:00 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:05:13.205 02:16:00 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:13.205 02:16:00 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57878 00:05:13.205 02:16:00 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:13.205 killing process with pid 57878 00:05:13.205 02:16:00 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:13.205 02:16:00 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57878' 00:05:13.205 02:16:00 alias_rpc -- common/autotest_common.sh@971 -- # kill 57878 00:05:13.205 02:16:00 alias_rpc -- common/autotest_common.sh@976 -- # wait 57878 00:05:15.132 00:05:15.132 real 0m2.764s 00:05:15.132 user 0m2.862s 00:05:15.132 sys 0m0.407s 00:05:15.132 02:16:01 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:15.132 02:16:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.132 ************************************ 00:05:15.132 END TEST alias_rpc 00:05:15.132 ************************************ 00:05:15.132 02:16:01 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:15.132 02:16:01 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:15.132 02:16:01 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:15.132 02:16:01 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:15.132 02:16:01 -- common/autotest_common.sh@10 -- # set +x 00:05:15.132 ************************************ 00:05:15.132 START TEST spdkcli_tcp 00:05:15.132 ************************************ 00:05:15.132 02:16:01 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:15.132 * Looking for test storage... 
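Two patterns in the alias_rpc run above are worth spelling out. load_config -i (alias_rpc.sh@17) replays a JSON config through rpc.py; going by the test's name, -i appears to let the config address methods by their deprecated alias names, and the redirect below is a placeholder since the trace does not show where the JSON comes from. killprocess (autotest_common.sh) then verifies the pid, resolves its command name, and terminates it, exactly as traced:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i < conf.json   # -i: accept alias names (assumption)
    kill -0 "$spdk_tgt_pid"                            # still running?
    name=$(ps --no-headers -o comm= "$spdk_tgt_pid")   # reactor_0 for an SPDK app
    [ "$name" = sudo ] || kill "$spdk_tgt_pid"         # traced as '[' reactor_0 = sudo ']'
    wait "$spdk_tgt_pid"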
00:05:15.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:15.132 02:16:01 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:15.132 02:16:01 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:15.132 02:16:01 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:15.132 02:16:01 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:15.132 02:16:01 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.132 02:16:01 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.132 02:16:01 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.132 02:16:01 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.132 02:16:01 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.132 02:16:01 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.132 02:16:01 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.132 02:16:01 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.132 02:16:01 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.132 02:16:01 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.132 02:16:01 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.132 02:16:01 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:15.132 02:16:01 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:15.132 02:16:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.132 02:16:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:15.132 02:16:01 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:15.132 02:16:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:15.132 02:16:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.132 02:16:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:15.132 02:16:01 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.132 02:16:01 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:15.132 02:16:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:15.132 02:16:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.132 02:16:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:15.132 02:16:01 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.132 02:16:01 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.132 02:16:01 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.132 02:16:01 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:15.132 02:16:01 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.132 02:16:01 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:15.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.132 --rc genhtml_branch_coverage=1 00:05:15.132 --rc genhtml_function_coverage=1 00:05:15.132 --rc genhtml_legend=1 00:05:15.132 --rc geninfo_all_blocks=1 00:05:15.132 --rc geninfo_unexecuted_blocks=1 00:05:15.132 00:05:15.132 ' 00:05:15.132 02:16:01 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:15.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.132 --rc genhtml_branch_coverage=1 00:05:15.132 --rc genhtml_function_coverage=1 00:05:15.132 --rc genhtml_legend=1 00:05:15.132 --rc geninfo_all_blocks=1 00:05:15.132 --rc geninfo_unexecuted_blocks=1 00:05:15.132 
00:05:15.132 ' 00:05:15.132 02:16:01 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:15.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.132 --rc genhtml_branch_coverage=1 00:05:15.132 --rc genhtml_function_coverage=1 00:05:15.132 --rc genhtml_legend=1 00:05:15.132 --rc geninfo_all_blocks=1 00:05:15.132 --rc geninfo_unexecuted_blocks=1 00:05:15.132 00:05:15.132 ' 00:05:15.132 02:16:01 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:15.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.132 --rc genhtml_branch_coverage=1 00:05:15.132 --rc genhtml_function_coverage=1 00:05:15.132 --rc genhtml_legend=1 00:05:15.132 --rc geninfo_all_blocks=1 00:05:15.132 --rc geninfo_unexecuted_blocks=1 00:05:15.132 00:05:15.132 ' 00:05:15.132 02:16:01 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:15.132 02:16:01 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:15.132 02:16:01 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:15.132 02:16:01 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:15.132 02:16:01 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:15.132 02:16:01 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:15.132 02:16:01 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:15.132 02:16:01 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:15.132 02:16:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:15.132 02:16:01 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57974 00:05:15.132 02:16:01 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57974 00:05:15.132 02:16:01 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 57974 ']' 00:05:15.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.132 02:16:01 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.132 02:16:01 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:15.132 02:16:01 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.132 02:16:01 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:15.132 02:16:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:15.132 02:16:01 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:15.132 [2024-11-04 02:16:01.984046] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
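Unlike the single-core targets in the earlier tests, spdkcli_tcp launches the target with '-m 0x3 -p 0' (tcp.sh@24): 0x3 is binary 11, so reactors run on cores 0 and 1, and '-p 0' makes core 0 the main core. That matches the '-c 0x3' in the EAL parameter line and the two "Reactor started" notices just below. The launch, with flags as traced:

    # -m 0x3: cpumask 0b11 -> reactors on cores 0 and 1; -p 0: main core 0
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0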
00:05:15.132 [2024-11-04 02:16:01.984450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57974 ] 00:05:15.132 [2024-11-04 02:16:02.141313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:15.132 [2024-11-04 02:16:02.238048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.132 [2024-11-04 02:16:02.238156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.069 02:16:02 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:16.069 02:16:02 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:05:16.069 02:16:02 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:16.069 02:16:02 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57986 00:05:16.069 02:16:02 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:16.069 [ 00:05:16.070 "bdev_malloc_delete", 00:05:16.070 "bdev_malloc_create", 00:05:16.070 "bdev_null_resize", 00:05:16.070 "bdev_null_delete", 00:05:16.070 "bdev_null_create", 00:05:16.070 "bdev_nvme_cuse_unregister", 00:05:16.070 "bdev_nvme_cuse_register", 00:05:16.070 "bdev_opal_new_user", 00:05:16.070 "bdev_opal_set_lock_state", 00:05:16.070 "bdev_opal_delete", 00:05:16.070 "bdev_opal_get_info", 00:05:16.070 "bdev_opal_create", 00:05:16.070 "bdev_nvme_opal_revert", 00:05:16.070 "bdev_nvme_opal_init", 00:05:16.070 "bdev_nvme_send_cmd", 00:05:16.070 "bdev_nvme_set_keys", 00:05:16.070 "bdev_nvme_get_path_iostat", 00:05:16.070 "bdev_nvme_get_mdns_discovery_info", 00:05:16.070 "bdev_nvme_stop_mdns_discovery", 00:05:16.070 "bdev_nvme_start_mdns_discovery", 00:05:16.070 "bdev_nvme_set_multipath_policy", 00:05:16.070 "bdev_nvme_set_preferred_path", 00:05:16.070 "bdev_nvme_get_io_paths", 00:05:16.070 "bdev_nvme_remove_error_injection", 00:05:16.070 "bdev_nvme_add_error_injection", 00:05:16.070 "bdev_nvme_get_discovery_info", 00:05:16.070 "bdev_nvme_stop_discovery", 00:05:16.070 "bdev_nvme_start_discovery", 00:05:16.070 "bdev_nvme_get_controller_health_info", 00:05:16.070 "bdev_nvme_disable_controller", 00:05:16.070 "bdev_nvme_enable_controller", 00:05:16.070 "bdev_nvme_reset_controller", 00:05:16.070 "bdev_nvme_get_transport_statistics", 00:05:16.070 "bdev_nvme_apply_firmware", 00:05:16.070 "bdev_nvme_detach_controller", 00:05:16.070 "bdev_nvme_get_controllers", 00:05:16.070 "bdev_nvme_attach_controller", 00:05:16.070 "bdev_nvme_set_hotplug", 00:05:16.070 "bdev_nvme_set_options", 00:05:16.070 "bdev_passthru_delete", 00:05:16.070 "bdev_passthru_create", 00:05:16.070 "bdev_lvol_set_parent_bdev", 00:05:16.070 "bdev_lvol_set_parent", 00:05:16.070 "bdev_lvol_check_shallow_copy", 00:05:16.070 "bdev_lvol_start_shallow_copy", 00:05:16.070 "bdev_lvol_grow_lvstore", 00:05:16.070 "bdev_lvol_get_lvols", 00:05:16.070 "bdev_lvol_get_lvstores", 00:05:16.070 "bdev_lvol_delete", 00:05:16.070 "bdev_lvol_set_read_only", 00:05:16.070 "bdev_lvol_resize", 00:05:16.070 "bdev_lvol_decouple_parent", 00:05:16.070 "bdev_lvol_inflate", 00:05:16.070 "bdev_lvol_rename", 00:05:16.070 "bdev_lvol_clone_bdev", 00:05:16.070 "bdev_lvol_clone", 00:05:16.070 "bdev_lvol_snapshot", 00:05:16.070 "bdev_lvol_create", 00:05:16.070 "bdev_lvol_delete_lvstore", 00:05:16.070 "bdev_lvol_rename_lvstore", 00:05:16.070 
"bdev_lvol_create_lvstore", 00:05:16.070 "bdev_raid_set_options", 00:05:16.070 "bdev_raid_remove_base_bdev", 00:05:16.070 "bdev_raid_add_base_bdev", 00:05:16.070 "bdev_raid_delete", 00:05:16.070 "bdev_raid_create", 00:05:16.070 "bdev_raid_get_bdevs", 00:05:16.070 "bdev_error_inject_error", 00:05:16.070 "bdev_error_delete", 00:05:16.070 "bdev_error_create", 00:05:16.070 "bdev_split_delete", 00:05:16.070 "bdev_split_create", 00:05:16.070 "bdev_delay_delete", 00:05:16.070 "bdev_delay_create", 00:05:16.070 "bdev_delay_update_latency", 00:05:16.070 "bdev_zone_block_delete", 00:05:16.070 "bdev_zone_block_create", 00:05:16.070 "blobfs_create", 00:05:16.070 "blobfs_detect", 00:05:16.070 "blobfs_set_cache_size", 00:05:16.070 "bdev_xnvme_delete", 00:05:16.070 "bdev_xnvme_create", 00:05:16.070 "bdev_aio_delete", 00:05:16.070 "bdev_aio_rescan", 00:05:16.070 "bdev_aio_create", 00:05:16.070 "bdev_ftl_set_property", 00:05:16.070 "bdev_ftl_get_properties", 00:05:16.070 "bdev_ftl_get_stats", 00:05:16.070 "bdev_ftl_unmap", 00:05:16.070 "bdev_ftl_unload", 00:05:16.070 "bdev_ftl_delete", 00:05:16.070 "bdev_ftl_load", 00:05:16.070 "bdev_ftl_create", 00:05:16.070 "bdev_virtio_attach_controller", 00:05:16.070 "bdev_virtio_scsi_get_devices", 00:05:16.070 "bdev_virtio_detach_controller", 00:05:16.070 "bdev_virtio_blk_set_hotplug", 00:05:16.070 "bdev_iscsi_delete", 00:05:16.070 "bdev_iscsi_create", 00:05:16.070 "bdev_iscsi_set_options", 00:05:16.070 "accel_error_inject_error", 00:05:16.070 "ioat_scan_accel_module", 00:05:16.070 "dsa_scan_accel_module", 00:05:16.070 "iaa_scan_accel_module", 00:05:16.070 "keyring_file_remove_key", 00:05:16.070 "keyring_file_add_key", 00:05:16.070 "keyring_linux_set_options", 00:05:16.070 "fsdev_aio_delete", 00:05:16.070 "fsdev_aio_create", 00:05:16.070 "iscsi_get_histogram", 00:05:16.070 "iscsi_enable_histogram", 00:05:16.070 "iscsi_set_options", 00:05:16.070 "iscsi_get_auth_groups", 00:05:16.070 "iscsi_auth_group_remove_secret", 00:05:16.070 "iscsi_auth_group_add_secret", 00:05:16.070 "iscsi_delete_auth_group", 00:05:16.070 "iscsi_create_auth_group", 00:05:16.070 "iscsi_set_discovery_auth", 00:05:16.070 "iscsi_get_options", 00:05:16.070 "iscsi_target_node_request_logout", 00:05:16.070 "iscsi_target_node_set_redirect", 00:05:16.070 "iscsi_target_node_set_auth", 00:05:16.070 "iscsi_target_node_add_lun", 00:05:16.070 "iscsi_get_stats", 00:05:16.070 "iscsi_get_connections", 00:05:16.070 "iscsi_portal_group_set_auth", 00:05:16.070 "iscsi_start_portal_group", 00:05:16.070 "iscsi_delete_portal_group", 00:05:16.070 "iscsi_create_portal_group", 00:05:16.070 "iscsi_get_portal_groups", 00:05:16.070 "iscsi_delete_target_node", 00:05:16.070 "iscsi_target_node_remove_pg_ig_maps", 00:05:16.070 "iscsi_target_node_add_pg_ig_maps", 00:05:16.070 "iscsi_create_target_node", 00:05:16.070 "iscsi_get_target_nodes", 00:05:16.070 "iscsi_delete_initiator_group", 00:05:16.070 "iscsi_initiator_group_remove_initiators", 00:05:16.070 "iscsi_initiator_group_add_initiators", 00:05:16.070 "iscsi_create_initiator_group", 00:05:16.071 "iscsi_get_initiator_groups", 00:05:16.071 "nvmf_set_crdt", 00:05:16.071 "nvmf_set_config", 00:05:16.071 "nvmf_set_max_subsystems", 00:05:16.071 "nvmf_stop_mdns_prr", 00:05:16.071 "nvmf_publish_mdns_prr", 00:05:16.071 "nvmf_subsystem_get_listeners", 00:05:16.071 "nvmf_subsystem_get_qpairs", 00:05:16.071 "nvmf_subsystem_get_controllers", 00:05:16.071 "nvmf_get_stats", 00:05:16.071 "nvmf_get_transports", 00:05:16.071 "nvmf_create_transport", 00:05:16.071 "nvmf_get_targets", 00:05:16.071 
"nvmf_delete_target", 00:05:16.071 "nvmf_create_target", 00:05:16.071 "nvmf_subsystem_allow_any_host", 00:05:16.071 "nvmf_subsystem_set_keys", 00:05:16.071 "nvmf_subsystem_remove_host", 00:05:16.071 "nvmf_subsystem_add_host", 00:05:16.071 "nvmf_ns_remove_host", 00:05:16.071 "nvmf_ns_add_host", 00:05:16.071 "nvmf_subsystem_remove_ns", 00:05:16.071 "nvmf_subsystem_set_ns_ana_group", 00:05:16.071 "nvmf_subsystem_add_ns", 00:05:16.071 "nvmf_subsystem_listener_set_ana_state", 00:05:16.071 "nvmf_discovery_get_referrals", 00:05:16.071 "nvmf_discovery_remove_referral", 00:05:16.071 "nvmf_discovery_add_referral", 00:05:16.071 "nvmf_subsystem_remove_listener", 00:05:16.071 "nvmf_subsystem_add_listener", 00:05:16.071 "nvmf_delete_subsystem", 00:05:16.071 "nvmf_create_subsystem", 00:05:16.071 "nvmf_get_subsystems", 00:05:16.071 "env_dpdk_get_mem_stats", 00:05:16.071 "nbd_get_disks", 00:05:16.071 "nbd_stop_disk", 00:05:16.071 "nbd_start_disk", 00:05:16.071 "ublk_recover_disk", 00:05:16.071 "ublk_get_disks", 00:05:16.071 "ublk_stop_disk", 00:05:16.071 "ublk_start_disk", 00:05:16.071 "ublk_destroy_target", 00:05:16.071 "ublk_create_target", 00:05:16.071 "virtio_blk_create_transport", 00:05:16.071 "virtio_blk_get_transports", 00:05:16.071 "vhost_controller_set_coalescing", 00:05:16.071 "vhost_get_controllers", 00:05:16.071 "vhost_delete_controller", 00:05:16.071 "vhost_create_blk_controller", 00:05:16.071 "vhost_scsi_controller_remove_target", 00:05:16.071 "vhost_scsi_controller_add_target", 00:05:16.071 "vhost_start_scsi_controller", 00:05:16.071 "vhost_create_scsi_controller", 00:05:16.071 "thread_set_cpumask", 00:05:16.071 "scheduler_set_options", 00:05:16.071 "framework_get_governor", 00:05:16.071 "framework_get_scheduler", 00:05:16.071 "framework_set_scheduler", 00:05:16.071 "framework_get_reactors", 00:05:16.071 "thread_get_io_channels", 00:05:16.071 "thread_get_pollers", 00:05:16.071 "thread_get_stats", 00:05:16.071 "framework_monitor_context_switch", 00:05:16.071 "spdk_kill_instance", 00:05:16.071 "log_enable_timestamps", 00:05:16.071 "log_get_flags", 00:05:16.071 "log_clear_flag", 00:05:16.071 "log_set_flag", 00:05:16.071 "log_get_level", 00:05:16.071 "log_set_level", 00:05:16.071 "log_get_print_level", 00:05:16.071 "log_set_print_level", 00:05:16.071 "framework_enable_cpumask_locks", 00:05:16.071 "framework_disable_cpumask_locks", 00:05:16.071 "framework_wait_init", 00:05:16.071 "framework_start_init", 00:05:16.071 "scsi_get_devices", 00:05:16.071 "bdev_get_histogram", 00:05:16.071 "bdev_enable_histogram", 00:05:16.071 "bdev_set_qos_limit", 00:05:16.071 "bdev_set_qd_sampling_period", 00:05:16.071 "bdev_get_bdevs", 00:05:16.071 "bdev_reset_iostat", 00:05:16.071 "bdev_get_iostat", 00:05:16.071 "bdev_examine", 00:05:16.071 "bdev_wait_for_examine", 00:05:16.071 "bdev_set_options", 00:05:16.071 "accel_get_stats", 00:05:16.071 "accel_set_options", 00:05:16.071 "accel_set_driver", 00:05:16.071 "accel_crypto_key_destroy", 00:05:16.071 "accel_crypto_keys_get", 00:05:16.071 "accel_crypto_key_create", 00:05:16.071 "accel_assign_opc", 00:05:16.071 "accel_get_module_info", 00:05:16.071 "accel_get_opc_assignments", 00:05:16.071 "vmd_rescan", 00:05:16.071 "vmd_remove_device", 00:05:16.071 "vmd_enable", 00:05:16.071 "sock_get_default_impl", 00:05:16.071 "sock_set_default_impl", 00:05:16.071 "sock_impl_set_options", 00:05:16.071 "sock_impl_get_options", 00:05:16.071 "iobuf_get_stats", 00:05:16.071 "iobuf_set_options", 00:05:16.071 "keyring_get_keys", 00:05:16.071 "framework_get_pci_devices", 00:05:16.071 
"framework_get_config", 00:05:16.071 "framework_get_subsystems", 00:05:16.071 "fsdev_set_opts", 00:05:16.071 "fsdev_get_opts", 00:05:16.071 "trace_get_info", 00:05:16.071 "trace_get_tpoint_group_mask", 00:05:16.071 "trace_disable_tpoint_group", 00:05:16.071 "trace_enable_tpoint_group", 00:05:16.071 "trace_clear_tpoint_mask", 00:05:16.071 "trace_set_tpoint_mask", 00:05:16.071 "notify_get_notifications", 00:05:16.071 "notify_get_types", 00:05:16.071 "spdk_get_version", 00:05:16.071 "rpc_get_methods" 00:05:16.071 ] 00:05:16.071 02:16:03 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:16.071 02:16:03 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:16.071 02:16:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:16.071 02:16:03 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:16.071 02:16:03 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57974 00:05:16.071 02:16:03 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 57974 ']' 00:05:16.071 02:16:03 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 57974 00:05:16.071 02:16:03 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:05:16.071 02:16:03 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:16.071 02:16:03 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57974 00:05:16.071 02:16:03 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:16.071 02:16:03 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:16.071 02:16:03 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57974' 00:05:16.071 killing process with pid 57974 00:05:16.071 02:16:03 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 57974 00:05:16.071 02:16:03 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 57974 00:05:17.977 00:05:17.977 real 0m2.794s 00:05:17.977 user 0m4.992s 00:05:17.977 sys 0m0.437s 00:05:17.977 02:16:04 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:17.977 02:16:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:17.977 ************************************ 00:05:17.977 END TEST spdkcli_tcp 00:05:17.977 ************************************ 00:05:17.977 02:16:04 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:17.978 02:16:04 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:17.978 02:16:04 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:17.978 02:16:04 -- common/autotest_common.sh@10 -- # set +x 00:05:17.978 ************************************ 00:05:17.978 START TEST dpdk_mem_utility 00:05:17.978 ************************************ 00:05:17.978 02:16:04 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:17.978 * Looking for test storage... 
00:05:17.978 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:17.978 02:16:04 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:17.978 02:16:04 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:05:17.978 02:16:04 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:17.978 02:16:04 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:17.978 02:16:04 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.978 02:16:04 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.978 02:16:04 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.978 02:16:04 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.978 02:16:04 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.978 02:16:04 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.978 02:16:04 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.978 02:16:04 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.978 02:16:04 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.978 02:16:04 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.978 02:16:04 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.978 02:16:04 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:17.978 02:16:04 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:17.978 02:16:04 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.978 02:16:04 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:17.978 02:16:04 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:17.978 02:16:04 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:17.978 02:16:04 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.978 02:16:04 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:17.978 02:16:04 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.978 02:16:04 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:17.978 02:16:04 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:17.978 02:16:04 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.978 02:16:04 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:17.978 02:16:04 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.978 02:16:04 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.978 02:16:04 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.978 02:16:04 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:17.978 02:16:04 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.978 02:16:04 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:17.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.978 --rc genhtml_branch_coverage=1 00:05:17.978 --rc genhtml_function_coverage=1 00:05:17.978 --rc genhtml_legend=1 00:05:17.978 --rc geninfo_all_blocks=1 00:05:17.978 --rc geninfo_unexecuted_blocks=1 00:05:17.978 00:05:17.978 ' 00:05:17.978 02:16:04 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:17.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.978 --rc 
genhtml_branch_coverage=1 00:05:17.978 --rc genhtml_function_coverage=1 00:05:17.978 --rc genhtml_legend=1 00:05:17.978 --rc geninfo_all_blocks=1 00:05:17.978 --rc geninfo_unexecuted_blocks=1 00:05:17.978 00:05:17.978 ' 00:05:17.978 02:16:04 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:17.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.978 --rc genhtml_branch_coverage=1 00:05:17.978 --rc genhtml_function_coverage=1 00:05:17.978 --rc genhtml_legend=1 00:05:17.978 --rc geninfo_all_blocks=1 00:05:17.978 --rc geninfo_unexecuted_blocks=1 00:05:17.978 00:05:17.978 ' 00:05:17.978 02:16:04 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:17.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.978 --rc genhtml_branch_coverage=1 00:05:17.978 --rc genhtml_function_coverage=1 00:05:17.978 --rc genhtml_legend=1 00:05:17.978 --rc geninfo_all_blocks=1 00:05:17.978 --rc geninfo_unexecuted_blocks=1 00:05:17.978 00:05:17.978 ' 00:05:17.978 02:16:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:17.978 02:16:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58080 00:05:17.978 02:16:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58080 00:05:17.978 02:16:04 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 58080 ']' 00:05:17.978 02:16:04 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.978 02:16:04 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:17.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.978 02:16:04 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.978 02:16:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:17.978 02:16:04 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:17.978 02:16:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:17.978 [2024-11-04 02:16:04.823083] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
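The dpdk_mem_utility run below first has the target write its memory stats to /tmp/spdk_mem_dump.txt via the env_dpdk_get_mem_stats RPC, then drives scripts/dpdk_mem_info.py twice (test_dpdk_mem_info.sh@21 and @23): once with no arguments for the summary of heaps, mempools, and memzones, then with '-m 0' for the per-element dump of heap id 0, which produces the long "element at address ..." listing:

    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py        # summary view
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0   # detail dump of heap 0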
00:05:17.978 [2024-11-04 02:16:04.823208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58080 ] 00:05:17.978 [2024-11-04 02:16:04.981350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.240 [2024-11-04 02:16:05.098995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.809 02:16:05 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:18.809 02:16:05 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:05:18.809 02:16:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:18.809 02:16:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:18.809 02:16:05 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.809 02:16:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:18.809 { 00:05:18.809 "filename": "/tmp/spdk_mem_dump.txt" 00:05:18.809 } 00:05:18.809 02:16:05 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.809 02:16:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:18.809 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:18.809 1 heaps totaling size 816.000000 MiB 00:05:18.810 size: 816.000000 MiB heap id: 0 00:05:18.810 end heaps---------- 00:05:18.810 9 mempools totaling size 595.772034 MiB 00:05:18.810 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:18.810 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:18.810 size: 92.545471 MiB name: bdev_io_58080 00:05:18.810 size: 50.003479 MiB name: msgpool_58080 00:05:18.810 size: 36.509338 MiB name: fsdev_io_58080 00:05:18.810 size: 21.763794 MiB name: PDU_Pool 00:05:18.810 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:18.810 size: 4.133484 MiB name: evtpool_58080 00:05:18.810 size: 0.026123 MiB name: Session_Pool 00:05:18.810 end mempools------- 00:05:18.810 6 memzones totaling size 4.142822 MiB 00:05:18.810 size: 1.000366 MiB name: RG_ring_0_58080 00:05:18.810 size: 1.000366 MiB name: RG_ring_1_58080 00:05:18.810 size: 1.000366 MiB name: RG_ring_4_58080 00:05:18.810 size: 1.000366 MiB name: RG_ring_5_58080 00:05:18.810 size: 0.125366 MiB name: RG_ring_2_58080 00:05:18.810 size: 0.015991 MiB name: RG_ring_3_58080 00:05:18.810 end memzones------- 00:05:18.810 02:16:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:18.810 heap id: 0 total size: 816.000000 MiB number of busy elements: 323 number of free elements: 18 00:05:18.810 list of free elements. 
size: 16.789429 MiB 00:05:18.810 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:18.810 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:18.810 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:18.810 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:18.810 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:18.810 element at address: 0x200019200000 with size: 0.999084 MiB 00:05:18.810 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:18.810 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:18.810 element at address: 0x200018a00000 with size: 0.959656 MiB 00:05:18.810 element at address: 0x200019500040 with size: 0.936401 MiB 00:05:18.810 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:18.810 element at address: 0x20001ac00000 with size: 0.558777 MiB 00:05:18.810 element at address: 0x200000c00000 with size: 0.490173 MiB 00:05:18.810 element at address: 0x200018e00000 with size: 0.487976 MiB 00:05:18.810 element at address: 0x200019600000 with size: 0.485413 MiB 00:05:18.810 element at address: 0x200012c00000 with size: 0.443237 MiB 00:05:18.810 element at address: 0x200028000000 with size: 0.391663 MiB 00:05:18.810 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:18.810 list of standard malloc elements. size: 199.289673 MiB 00:05:18.810 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:18.810 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:18.810 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:18.810 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:18.810 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:18.810 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:18.810 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:18.810 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:18.810 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:18.810 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:05:18.810 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:18.810 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:18.810 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:18.810 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:18.810 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:18.810 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:18.810 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:18.810 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:18.810 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:18.810 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:18.810 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:18.810 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:18.810 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:18.810 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:18.810 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:18.810 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:18.810 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:18.810 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:05:18.810 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:18.810 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:05:18.810 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:18.810 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:18.810 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:18.810 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:18.810 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:18.810 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:18.810 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:18.810 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:18.810 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:18.810 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:18.810 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:18.810 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:18.810 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:18.810 element at 
address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:18.810 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:18.810 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:18.810 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:18.810 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:18.810 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:18.810 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:18.810 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:18.810 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:18.810 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:18.810 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:18.810 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:18.810 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:18.810 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:18.810 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:18.810 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:18.810 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200012c71780 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200012c71880 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200012c71980 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200012c72080 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200012c72180 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200018e7cec0 
with size: 0.000244 MiB 00:05:18.811 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:18.811 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:05:18.811 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac8f0c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac8f1c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac8f2c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac8f3c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac8f4c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac8f5c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac8f6c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac8f7c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac8f8c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac910c0 with size: 0.000244 MiB 
00:05:18.811 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:05:18.811 element at 
address: 0x20001ac942c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200028064440 with size: 0.000244 MiB 00:05:18.811 element at address: 0x200028064540 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20002806b200 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20002806b480 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20002806b580 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20002806b680 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20002806b780 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20002806b880 with size: 0.000244 MiB 00:05:18.811 element at address: 0x20002806b980 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806be80 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806c080 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806c180 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806c280 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806c380 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806c480 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806c580 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806c680 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806c780 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806c880 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806c980 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806d080 
with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806d180 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806d280 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806d380 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806d480 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806d580 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806d680 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806d780 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806d880 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806d980 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806da80 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806db80 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806de80 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806df80 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806e080 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806e180 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806e280 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806e380 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806e480 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806e580 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806e680 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806e780 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806e880 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806e980 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806f080 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806f180 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806f280 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806f380 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806f480 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806f580 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806f680 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806f780 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806f880 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806f980 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:05:18.812 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:05:18.812 list of memzone associated elements. 
size: 599.920898 MiB 00:05:18.812 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:05:18.812 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:18.812 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:05:18.812 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:18.812 element at address: 0x200012df4740 with size: 92.045105 MiB 00:05:18.812 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58080_0 00:05:18.812 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:18.812 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58080_0 00:05:18.812 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:18.812 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58080_0 00:05:18.812 element at address: 0x2000197be900 with size: 20.255615 MiB 00:05:18.812 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:18.812 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:05:18.812 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:18.812 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:18.812 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58080_0 00:05:18.812 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:18.812 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58080 00:05:18.812 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:18.812 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58080 00:05:18.812 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:18.812 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:18.812 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:05:18.812 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:18.812 element at address: 0x200018afde00 with size: 1.008179 MiB 00:05:18.812 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:18.812 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:05:18.812 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:18.812 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:18.812 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58080 00:05:18.812 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:18.812 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58080 00:05:18.812 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:05:18.812 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58080 00:05:18.812 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:05:18.812 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58080 00:05:18.812 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:18.812 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58080 00:05:18.812 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:18.812 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58080 00:05:18.812 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:05:18.812 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:18.812 element at address: 0x200012c72280 with size: 0.500549 MiB 00:05:18.812 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:18.812 element at address: 0x20001967c440 with size: 0.250549 MiB 00:05:18.812 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:18.812 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:18.812 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58080 00:05:18.812 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:18.812 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58080 00:05:18.812 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:05:18.812 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:18.812 element at address: 0x200028064640 with size: 0.023804 MiB 00:05:18.812 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:18.812 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:18.812 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58080 00:05:18.812 element at address: 0x20002806a7c0 with size: 0.002502 MiB 00:05:18.812 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:18.812 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:18.812 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58080 00:05:18.812 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:18.812 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58080 00:05:18.812 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:18.812 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58080 00:05:18.812 element at address: 0x20002806b300 with size: 0.000366 MiB 00:05:18.812 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:18.812 02:16:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:18.812 02:16:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58080 00:05:18.812 02:16:05 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 58080 ']' 00:05:18.812 02:16:05 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 58080 00:05:18.812 02:16:05 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:05:18.812 02:16:05 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:18.812 02:16:05 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58080 00:05:18.812 02:16:05 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:18.812 02:16:05 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:18.812 02:16:05 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58080' 00:05:18.812 killing process with pid 58080 00:05:18.812 02:16:05 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 58080 00:05:18.813 02:16:05 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 58080 00:05:20.187 00:05:20.187 real 0m2.536s 00:05:20.187 user 0m2.563s 00:05:20.187 sys 0m0.381s 00:05:20.187 02:16:07 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:20.187 02:16:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:20.187 ************************************ 00:05:20.187 END TEST dpdk_mem_utility 00:05:20.187 ************************************ 00:05:20.187 02:16:07 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:20.187 02:16:07 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:20.187 02:16:07 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:20.187 02:16:07 -- common/autotest_common.sh@10 -- # set +x 
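The long element/memzone listing above is the raw DPDK heap map that test_dpdk_mem_info.sh has the target dump before tearing it down; the teardown itself goes through the killprocess helper from common/autotest_common.sh, whose xtrace appears just before the timing summary. A minimal sketch of that helper, reconstructed only from the steps visible in the trace (the real function handles more cases, e.g. processes launched via sudo, which is only checked for here):

    killprocess() {
        local pid=$1
        # An empty pid is a caller bug; a process that is already gone is fine.
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2> /dev/null || return 0
        if [[ $(uname) == Linux ]]; then
            # comm= yields the bare command name, e.g. reactor_0 for an SPDK app;
            # the real helper special-cases process_name == sudo (elided here).
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        # Reap the process so a non-zero exit status is not silently lost.
        wait "$pid"
    }

In the trace it is invoked as killprocess 58080, where 58080 is the pid of the dpdk_mem_utility target.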
00:05:20.187 ************************************ 00:05:20.187 START TEST event 00:05:20.187 ************************************ 00:05:20.187 02:16:07 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:20.187 * Looking for test storage... 00:05:20.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:20.187 02:16:07 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:20.187 02:16:07 event -- common/autotest_common.sh@1691 -- # lcov --version 00:05:20.187 02:16:07 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:20.446 02:16:07 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:20.446 02:16:07 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.446 02:16:07 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.446 02:16:07 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.446 02:16:07 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.446 02:16:07 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.446 02:16:07 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.446 02:16:07 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.446 02:16:07 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.446 02:16:07 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.446 02:16:07 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.446 02:16:07 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.446 02:16:07 event -- scripts/common.sh@344 -- # case "$op" in 00:05:20.446 02:16:07 event -- scripts/common.sh@345 -- # : 1 00:05:20.446 02:16:07 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.446 02:16:07 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:20.446 02:16:07 event -- scripts/common.sh@365 -- # decimal 1 00:05:20.446 02:16:07 event -- scripts/common.sh@353 -- # local d=1 00:05:20.446 02:16:07 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.446 02:16:07 event -- scripts/common.sh@355 -- # echo 1 00:05:20.446 02:16:07 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.446 02:16:07 event -- scripts/common.sh@366 -- # decimal 2 00:05:20.446 02:16:07 event -- scripts/common.sh@353 -- # local d=2 00:05:20.446 02:16:07 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.446 02:16:07 event -- scripts/common.sh@355 -- # echo 2 00:05:20.446 02:16:07 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.446 02:16:07 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.446 02:16:07 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.446 02:16:07 event -- scripts/common.sh@368 -- # return 0 00:05:20.446 02:16:07 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.446 02:16:07 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:20.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.446 --rc genhtml_branch_coverage=1 00:05:20.446 --rc genhtml_function_coverage=1 00:05:20.446 --rc genhtml_legend=1 00:05:20.446 --rc geninfo_all_blocks=1 00:05:20.446 --rc geninfo_unexecuted_blocks=1 00:05:20.446 00:05:20.446 ' 00:05:20.446 02:16:07 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:20.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.446 --rc genhtml_branch_coverage=1 00:05:20.446 --rc genhtml_function_coverage=1 00:05:20.446 --rc genhtml_legend=1 00:05:20.446 --rc 
geninfo_all_blocks=1 00:05:20.446 --rc geninfo_unexecuted_blocks=1 00:05:20.446 00:05:20.446 ' 00:05:20.446 02:16:07 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:20.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.446 --rc genhtml_branch_coverage=1 00:05:20.446 --rc genhtml_function_coverage=1 00:05:20.446 --rc genhtml_legend=1 00:05:20.446 --rc geninfo_all_blocks=1 00:05:20.446 --rc geninfo_unexecuted_blocks=1 00:05:20.446 00:05:20.446 ' 00:05:20.446 02:16:07 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:20.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.446 --rc genhtml_branch_coverage=1 00:05:20.446 --rc genhtml_function_coverage=1 00:05:20.446 --rc genhtml_legend=1 00:05:20.446 --rc geninfo_all_blocks=1 00:05:20.446 --rc geninfo_unexecuted_blocks=1 00:05:20.446 00:05:20.446 ' 00:05:20.446 02:16:07 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:20.446 02:16:07 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:20.446 02:16:07 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:20.446 02:16:07 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:05:20.446 02:16:07 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:20.446 02:16:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.446 ************************************ 00:05:20.446 START TEST event_perf 00:05:20.446 ************************************ 00:05:20.446 02:16:07 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:20.446 Running I/O for 1 seconds...[2024-11-04 02:16:07.340720] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:05:20.446 [2024-11-04 02:16:07.340804] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58171 ] 00:05:20.446 [2024-11-04 02:16:07.495239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:20.705 [2024-11-04 02:16:07.599324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.705 [2024-11-04 02:16:07.599508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.705 Running I/O for 1 seconds...[2024-11-04 02:16:07.600086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:20.705 [2024-11-04 02:16:07.600171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.640 00:05:21.640 lcore 0: 158070 00:05:21.640 lcore 1: 158067 00:05:21.640 lcore 2: 158069 00:05:21.640 lcore 3: 158069 00:05:21.897 done. 
00:05:21.897 00:05:21.897 real 0m1.451s 00:05:21.897 user 0m4.250s 00:05:21.897 sys 0m0.076s 00:05:21.897 02:16:08 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:21.897 02:16:08 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:21.897 ************************************ 00:05:21.897 END TEST event_perf 00:05:21.897 ************************************ 00:05:21.897 02:16:08 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:21.897 02:16:08 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:21.897 02:16:08 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:21.897 02:16:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:21.897 ************************************ 00:05:21.897 START TEST event_reactor 00:05:21.897 ************************************ 00:05:21.897 02:16:08 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:21.897 [2024-11-04 02:16:08.839249] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:05:21.897 [2024-11-04 02:16:08.839337] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58211 ] 00:05:21.897 [2024-11-04 02:16:08.991263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.155 [2024-11-04 02:16:09.087259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.531 test_start 00:05:23.531 oneshot 00:05:23.531 tick 100 00:05:23.531 tick 100 00:05:23.531 tick 250 00:05:23.531 tick 100 00:05:23.531 tick 100 00:05:23.531 tick 250 00:05:23.531 tick 100 00:05:23.531 tick 500 00:05:23.531 tick 100 00:05:23.531 tick 100 00:05:23.531 tick 250 00:05:23.531 tick 100 00:05:23.531 tick 100 00:05:23.531 test_end 00:05:23.531 00:05:23.531 real 0m1.432s 00:05:23.531 user 0m1.260s 00:05:23.531 sys 0m0.064s 00:05:23.531 02:16:10 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:23.531 02:16:10 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:23.531 ************************************ 00:05:23.531 END TEST event_reactor 00:05:23.531 ************************************ 00:05:23.531 02:16:10 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:23.531 02:16:10 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:23.531 02:16:10 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:23.531 02:16:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.531 ************************************ 00:05:23.531 START TEST event_reactor_perf 00:05:23.531 ************************************ 00:05:23.531 02:16:10 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:23.531 [2024-11-04 02:16:10.312633] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
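Every case in this log is bracketed by the same START TEST / END TEST banners and a real/user/sys triplet, produced by the run_test wrapper in common/autotest_common.sh; its argument check ('[' 2 -le 1 ']') and xtrace toggling are visible in the trace above. A minimal sketch of the pattern, assuming the banner text shown in the log (details of the real wrapper, such as its timing bookkeeping, may differ):

    run_test() {
        # A test name plus at least one command word are required.
        [[ $# -le 1 ]] && return 1
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        # The time keyword emits the real/user/sys lines seen after each case.
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }

For example, run_test event_reactor_perf "$rootdir/test/event/reactor_perf/reactor_perf" -t 1 mirrors the invocation whose startup banner appears just above.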
00:05:23.531 [2024-11-04 02:16:10.312876] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58247 ] 00:05:23.531 [2024-11-04 02:16:10.474013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.531 [2024-11-04 02:16:10.575477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.914 test_start 00:05:24.914 test_end 00:05:24.914 Performance: 315422 events per second 00:05:24.914 00:05:24.914 real 0m1.412s 00:05:24.914 user 0m1.232s 00:05:24.914 sys 0m0.071s 00:05:24.914 ************************************ 00:05:24.914 END TEST event_reactor_perf 00:05:24.914 ************************************ 00:05:24.914 02:16:11 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:24.914 02:16:11 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:24.914 02:16:11 event -- event/event.sh@49 -- # uname -s 00:05:24.914 02:16:11 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:24.914 02:16:11 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:24.914 02:16:11 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:24.914 02:16:11 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:24.914 02:16:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.914 ************************************ 00:05:24.914 START TEST event_scheduler 00:05:24.914 ************************************ 00:05:24.914 02:16:11 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:24.914 * Looking for test storage... 
00:05:24.914 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:24.914 02:16:11 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:24.914 02:16:11 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:05:24.914 02:16:11 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:24.914 02:16:11 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:24.914 02:16:11 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.914 02:16:11 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.914 02:16:11 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.914 02:16:11 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.914 02:16:11 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.914 02:16:11 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.914 02:16:11 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.914 02:16:11 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.914 02:16:11 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.914 02:16:11 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.914 02:16:11 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.914 02:16:11 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:24.914 02:16:11 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:24.914 02:16:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.914 02:16:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.914 02:16:11 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:24.914 02:16:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:24.914 02:16:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.914 02:16:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:24.914 02:16:11 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.914 02:16:11 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:24.914 02:16:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:24.914 02:16:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.914 02:16:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:24.914 02:16:11 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.914 02:16:11 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.914 02:16:11 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.914 02:16:11 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:24.914 02:16:11 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.914 02:16:11 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:24.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.914 --rc genhtml_branch_coverage=1 00:05:24.914 --rc genhtml_function_coverage=1 00:05:24.914 --rc genhtml_legend=1 00:05:24.914 --rc geninfo_all_blocks=1 00:05:24.914 --rc geninfo_unexecuted_blocks=1 00:05:24.914 00:05:24.914 ' 00:05:24.914 02:16:11 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:24.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.914 --rc genhtml_branch_coverage=1 00:05:24.914 --rc genhtml_function_coverage=1 00:05:24.914 --rc genhtml_legend=1 00:05:24.914 --rc geninfo_all_blocks=1 00:05:24.914 --rc geninfo_unexecuted_blocks=1 00:05:24.914 00:05:24.914 ' 00:05:24.914 02:16:11 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:24.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.914 --rc genhtml_branch_coverage=1 00:05:24.914 --rc genhtml_function_coverage=1 00:05:24.914 --rc genhtml_legend=1 00:05:24.914 --rc geninfo_all_blocks=1 00:05:24.914 --rc geninfo_unexecuted_blocks=1 00:05:24.914 00:05:24.914 ' 00:05:24.914 02:16:11 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:24.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.914 --rc genhtml_branch_coverage=1 00:05:24.914 --rc genhtml_function_coverage=1 00:05:24.914 --rc genhtml_legend=1 00:05:24.914 --rc geninfo_all_blocks=1 00:05:24.914 --rc geninfo_unexecuted_blocks=1 00:05:24.914 00:05:24.914 ' 00:05:24.914 02:16:11 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:24.914 02:16:11 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58318 00:05:24.914 02:16:11 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.914 02:16:11 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58318 00:05:24.915 02:16:11 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 58318 ']' 00:05:24.915 02:16:11 event.event_scheduler -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:24.915 02:16:11 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:24.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.915 02:16:11 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.915 02:16:11 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:24.915 02:16:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.915 02:16:11 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:24.915 [2024-11-04 02:16:11.960331] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:05:24.915 [2024-11-04 02:16:11.960451] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58318 ] 00:05:25.183 [2024-11-04 02:16:12.117193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:25.183 [2024-11-04 02:16:12.223312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.183 [2024-11-04 02:16:12.223437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.183 [2024-11-04 02:16:12.223714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:25.183 [2024-11-04 02:16:12.223749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.811 02:16:12 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:25.811 02:16:12 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:05:25.811 02:16:12 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:25.811 02:16:12 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.811 02:16:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.811 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:25.811 POWER: Cannot set governor of lcore 0 to userspace 00:05:25.811 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:25.811 POWER: Cannot set governor of lcore 0 to performance 00:05:25.811 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:25.811 POWER: Cannot set governor of lcore 0 to userspace 00:05:25.811 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:25.811 POWER: Cannot set governor of lcore 0 to userspace 00:05:25.811 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:25.811 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:25.811 POWER: Unable to set Power Management Environment for lcore 0 00:05:25.811 [2024-11-04 02:16:12.805447] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:25.811 [2024-11-04 02:16:12.805468] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:25.811 [2024-11-04 02:16:12.805477] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:25.811 [2024-11-04 02:16:12.805492] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:25.811 [2024-11-04 02:16:12.805501] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:25.811 [2024-11-04 02:16:12.805510] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:25.811 02:16:12 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.811 02:16:12 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:25.811 02:16:12 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.811 02:16:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:26.073 [2024-11-04 02:16:13.028322] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:26.073 02:16:13 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.073 02:16:13 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:26.073 02:16:13 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:26.073 02:16:13 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:26.073 02:16:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:26.073 ************************************ 00:05:26.073 START TEST scheduler_create_thread 00:05:26.073 ************************************ 00:05:26.073 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:05:26.073 02:16:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:26.073 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.073 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.073 2 00:05:26.073 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.073 02:16:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:26.073 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.073 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.073 3 00:05:26.073 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.073 02:16:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:26.073 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.073 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.073 4 00:05:26.073 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.073 02:16:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:26.073 02:16:13 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.073 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.073 5 00:05:26.073 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.073 02:16:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:26.073 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.073 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.073 6 00:05:26.073 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.073 02:16:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:26.073 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.073 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.073 7 00:05:26.073 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.073 02:16:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:26.073 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.073 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.073 8 00:05:26.073 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.073 02:16:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:26.073 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.073 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.073 9 00:05:26.073 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.073 02:16:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:26.074 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.074 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.074 10 00:05:26.074 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.074 02:16:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:26.074 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.074 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.074 02:16:13 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.074 02:16:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:26.074 02:16:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:26.074 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.074 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.074 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.074 02:16:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:26.074 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.074 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.074 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.074 02:16:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:26.074 02:16:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:26.074 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.074 02:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.455 02:16:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.455 00:05:27.455 real 0m1.170s 00:05:27.455 user 0m0.015s 00:05:27.455 sys 0m0.004s 00:05:27.455 ************************************ 00:05:27.455 END TEST scheduler_create_thread 00:05:27.455 ************************************ 00:05:27.455 02:16:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:27.455 02:16:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.455 02:16:14 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:27.455 02:16:14 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58318 00:05:27.455 02:16:14 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 58318 ']' 00:05:27.455 02:16:14 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 58318 00:05:27.455 02:16:14 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:05:27.455 02:16:14 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:27.456 02:16:14 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58318 00:05:27.456 02:16:14 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:27.456 02:16:14 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:27.456 killing process with pid 58318 00:05:27.456 02:16:14 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58318' 00:05:27.456 02:16:14 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 58318 00:05:27.456 02:16:14 event.event_scheduler -- 
common/autotest_common.sh@976 -- # wait 58318 00:05:27.714 [2024-11-04 02:16:14.689547] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:28.281 00:05:28.281 real 0m3.509s 00:05:28.281 user 0m5.727s 00:05:28.281 sys 0m0.334s 00:05:28.281 02:16:15 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:28.281 02:16:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:28.281 ************************************ 00:05:28.281 END TEST event_scheduler 00:05:28.281 ************************************ 00:05:28.281 02:16:15 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:28.281 02:16:15 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:28.281 02:16:15 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:28.281 02:16:15 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:28.281 02:16:15 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.281 ************************************ 00:05:28.281 START TEST app_repeat 00:05:28.281 ************************************ 00:05:28.281 02:16:15 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:05:28.281 02:16:15 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.281 02:16:15 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.281 02:16:15 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:28.281 02:16:15 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.281 02:16:15 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:28.281 02:16:15 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:28.281 02:16:15 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:28.281 Process app_repeat pid: 58406 00:05:28.281 spdk_app_start Round 0 00:05:28.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:28.281 02:16:15 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58406 00:05:28.281 02:16:15 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.281 02:16:15 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58406' 00:05:28.281 02:16:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:28.281 02:16:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:28.281 02:16:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58406 /var/tmp/spdk-nbd.sock 00:05:28.281 02:16:15 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:28.281 02:16:15 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58406 ']' 00:05:28.281 02:16:15 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.281 02:16:15 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:28.281 02:16:15 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:28.281 02:16:15 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:28.281 02:16:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.281 [2024-11-04 02:16:15.350837] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
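Before app_repeat can be driven over RPC, the harness blocks in waitforlisten until the target's UNIX domain socket (/var/tmp/spdk-nbd.sock here, with max_retries=100 as in the trace) is usable. The real helper's readiness probe is not visible in the log; the sketch below assumes polling the standard rpc_get_methods RPC through scripts/rpc.py, with $rootdir assumed to point at the SPDK checkout:

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i=0
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while (( i++ < max_retries )); do
            # Give up early if the target died during startup.
            kill -0 "$pid" 2> /dev/null || return 1
            # Assumed readiness probe: the socket answers a trivial RPC.
            if "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.5
        done
        return 1
    }

Under those assumptions, waitforlisten 58406 /var/tmp/spdk-nbd.sock matches the call implied by the app_repeat preamble above.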
00:05:28.281 [2024-11-04 02:16:15.350983] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58406 ] 00:05:28.539 [2024-11-04 02:16:15.505935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.539 [2024-11-04 02:16:15.586246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.539 [2024-11-04 02:16:15.586355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.105 02:16:16 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:29.105 02:16:16 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:29.105 02:16:16 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.363 Malloc0 00:05:29.363 02:16:16 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.621 Malloc1 00:05:29.621 02:16:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.621 02:16:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.621 02:16:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.621 02:16:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:29.621 02:16:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.621 02:16:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:29.621 02:16:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.621 02:16:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.621 02:16:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.621 02:16:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:29.621 02:16:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.621 02:16:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:29.621 02:16:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:29.621 02:16:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:29.621 02:16:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.621 02:16:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:29.879 /dev/nbd0 00:05:29.879 02:16:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:29.879 02:16:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:29.879 02:16:16 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:29.879 02:16:16 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:29.879 02:16:16 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:29.879 02:16:16 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:29.879 02:16:16 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:29.879 02:16:16 event.app_repeat -- 
common/autotest_common.sh@875 -- # break 00:05:29.879 02:16:16 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:29.879 02:16:16 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:29.879 02:16:16 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:29.879 1+0 records in 00:05:29.879 1+0 records out 00:05:29.879 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422511 s, 9.7 MB/s 00:05:29.879 02:16:16 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:29.879 02:16:16 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:29.879 02:16:16 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:29.879 02:16:16 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:29.879 02:16:16 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:29.879 02:16:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.879 02:16:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.879 02:16:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:30.137 /dev/nbd1 00:05:30.137 02:16:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:30.137 02:16:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:30.137 02:16:17 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:30.137 02:16:17 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:30.137 02:16:17 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:30.137 02:16:17 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:30.137 02:16:17 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:30.137 02:16:17 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:30.137 02:16:17 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:30.137 02:16:17 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:30.137 02:16:17 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.137 1+0 records in 00:05:30.137 1+0 records out 00:05:30.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020817 s, 19.7 MB/s 00:05:30.137 02:16:17 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.137 02:16:17 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:30.137 02:16:17 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.137 02:16:17 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:30.137 02:16:17 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:30.137 02:16:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.137 02:16:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.137 02:16:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.137 02:16:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.137 
02:16:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:30.395 { 00:05:30.395 "nbd_device": "/dev/nbd0", 00:05:30.395 "bdev_name": "Malloc0" 00:05:30.395 }, 00:05:30.395 { 00:05:30.395 "nbd_device": "/dev/nbd1", 00:05:30.395 "bdev_name": "Malloc1" 00:05:30.395 } 00:05:30.395 ]' 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:30.395 { 00:05:30.395 "nbd_device": "/dev/nbd0", 00:05:30.395 "bdev_name": "Malloc0" 00:05:30.395 }, 00:05:30.395 { 00:05:30.395 "nbd_device": "/dev/nbd1", 00:05:30.395 "bdev_name": "Malloc1" 00:05:30.395 } 00:05:30.395 ]' 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:30.395 /dev/nbd1' 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:30.395 /dev/nbd1' 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:30.395 256+0 records in 00:05:30.395 256+0 records out 00:05:30.395 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00711032 s, 147 MB/s 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:30.395 256+0 records in 00:05:30.395 256+0 records out 00:05:30.395 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166883 s, 62.8 MB/s 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:30.395 256+0 records in 00:05:30.395 256+0 records out 00:05:30.395 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0179576 s, 58.4 MB/s 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.395 02:16:17 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.395 02:16:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:30.710 02:16:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:30.710 02:16:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:30.710 02:16:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:30.710 02:16:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.710 02:16:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.710 02:16:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:30.710 02:16:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:30.710 02:16:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.710 02:16:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.710 02:16:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:30.968 02:16:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:30.968 02:16:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:30.968 02:16:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:30.968 02:16:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.968 02:16:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.968 02:16:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:30.968 02:16:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:30.968 02:16:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.968 02:16:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.968 02:16:17 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.968 02:16:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.225 02:16:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:31.226 02:16:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:31.226 02:16:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.226 02:16:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:31.226 02:16:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:31.226 02:16:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.226 02:16:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:31.226 02:16:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:31.226 02:16:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:31.226 02:16:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:31.226 02:16:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:31.226 02:16:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:31.226 02:16:18 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:31.483 02:16:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:32.050 [2024-11-04 02:16:18.962822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.050 [2024-11-04 02:16:19.035772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.050 [2024-11-04 02:16:19.035883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.050 [2024-11-04 02:16:19.132145] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:32.050 [2024-11-04 02:16:19.132213] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:34.577 spdk_app_start Round 1 00:05:34.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:34.577 02:16:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:34.578 02:16:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:34.578 02:16:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58406 /var/tmp/spdk-nbd.sock 00:05:34.578 02:16:21 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58406 ']' 00:05:34.578 02:16:21 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:34.578 02:16:21 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:34.578 02:16:21 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
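Every app_repeat round runs the same nbd write/verify cycle. Stripped of the xtrace plumbing, Round 0 above is roughly the following sketch; it assumes the app_repeat binary is already listening on /var/tmp/spdk-nbd.sock (it was launched with -r /var/tmp/spdk-nbd.sock), and /tmp/nbdrandtest is a hypothetical scratch path standing in for the in-tree test/event/nbdrandtest file:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  tmp=/tmp/nbdrandtest

  $rpc bdev_malloc_create 64 4096        # 64 MiB bdev, 4 KiB blocks; the first one is named Malloc0
  $rpc nbd_start_disk Malloc0 /dev/nbd0  # expose the bdev as a kernel block device

  dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB of random data
  dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct  # write it through nbd
  cmp -b -n 1M "$tmp" /dev/nbd0                             # read back and byte-compare

  $rpc nbd_stop_disk /dev/nbd0           # detach before the next round
  $rpc spdk_kill_instance SIGTERM        # ask the app to shut itself down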
00:05:34.578 02:16:21 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:34.578 02:16:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:34.578 02:16:21 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:34.578 02:16:21 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:34.578 02:16:21 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:34.835 Malloc0 00:05:34.835 02:16:21 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.093 Malloc1 00:05:35.093 02:16:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.093 02:16:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.093 02:16:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.093 02:16:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:35.093 02:16:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.093 02:16:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:35.093 02:16:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.093 02:16:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.093 02:16:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.093 02:16:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:35.093 02:16:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.093 02:16:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:35.093 02:16:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:35.093 02:16:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:35.093 02:16:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.093 02:16:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:35.351 /dev/nbd0 00:05:35.351 02:16:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:35.351 02:16:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:35.351 02:16:22 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:35.351 02:16:22 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:35.352 02:16:22 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:35.352 02:16:22 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:35.352 02:16:22 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:35.352 02:16:22 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:35.352 02:16:22 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:35.352 02:16:22 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:35.352 02:16:22 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.352 1+0 records in 00:05:35.352 1+0 records out 
00:05:35.352 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409676 s, 10.0 MB/s 00:05:35.352 02:16:22 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:35.352 02:16:22 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:35.352 02:16:22 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:35.352 02:16:22 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:35.352 02:16:22 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:35.352 02:16:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.352 02:16:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.352 02:16:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:35.609 /dev/nbd1 00:05:35.609 02:16:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:35.609 02:16:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:35.609 02:16:22 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:35.609 02:16:22 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:35.610 02:16:22 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:35.610 02:16:22 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:35.610 02:16:22 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:35.610 02:16:22 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:35.610 02:16:22 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:35.610 02:16:22 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:35.610 02:16:22 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.610 1+0 records in 00:05:35.610 1+0 records out 00:05:35.610 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185681 s, 22.1 MB/s 00:05:35.610 02:16:22 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:35.610 02:16:22 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:35.610 02:16:22 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:35.610 02:16:22 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:35.610 02:16:22 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:35.610 02:16:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.610 02:16:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.610 02:16:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:35.610 02:16:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.610 02:16:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:35.868 { 00:05:35.868 "nbd_device": "/dev/nbd0", 00:05:35.868 "bdev_name": "Malloc0" 00:05:35.868 }, 00:05:35.868 { 00:05:35.868 "nbd_device": "/dev/nbd1", 00:05:35.868 "bdev_name": "Malloc1" 00:05:35.868 } 
00:05:35.868 ]' 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:35.868 { 00:05:35.868 "nbd_device": "/dev/nbd0", 00:05:35.868 "bdev_name": "Malloc0" 00:05:35.868 }, 00:05:35.868 { 00:05:35.868 "nbd_device": "/dev/nbd1", 00:05:35.868 "bdev_name": "Malloc1" 00:05:35.868 } 00:05:35.868 ]' 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:35.868 /dev/nbd1' 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:35.868 /dev/nbd1' 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:35.868 256+0 records in 00:05:35.868 256+0 records out 00:05:35.868 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00649575 s, 161 MB/s 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:35.868 256+0 records in 00:05:35.868 256+0 records out 00:05:35.868 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0199921 s, 52.4 MB/s 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:35.868 256+0 records in 00:05:35.868 256+0 records out 00:05:35.868 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0174858 s, 60.0 MB/s 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:35.868 02:16:22 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.868 02:16:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:36.190 02:16:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:36.190 02:16:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:36.190 02:16:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:36.190 02:16:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.190 02:16:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.190 02:16:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:36.190 02:16:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.190 02:16:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.190 02:16:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.190 02:16:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:36.190 02:16:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:36.190 02:16:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:36.190 02:16:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:36.190 02:16:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.190 02:16:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.190 02:16:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:36.190 02:16:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.190 02:16:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.190 02:16:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.190 02:16:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.190 02:16:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.448 02:16:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:36.448 02:16:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:36.448 02:16:23 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:36.448 02:16:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:36.448 02:16:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:36.448 02:16:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.448 02:16:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:36.448 02:16:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:36.448 02:16:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:36.448 02:16:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:36.448 02:16:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:36.448 02:16:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:36.448 02:16:23 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:36.706 02:16:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:37.272 [2024-11-04 02:16:24.338097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.531 [2024-11-04 02:16:24.413666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.531 [2024-11-04 02:16:24.413697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.531 [2024-11-04 02:16:24.516138] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:37.531 [2024-11-04 02:16:24.516184] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:40.060 spdk_app_start Round 2 00:05:40.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:40.060 02:16:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:40.060 02:16:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:40.060 02:16:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58406 /var/tmp/spdk-nbd.sock 00:05:40.060 02:16:26 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58406 ']' 00:05:40.060 02:16:26 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.060 02:16:26 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:40.060 02:16:26 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
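The waitfornbd calls that precede every dd above are a readiness gate: poll /proc/partitions until the kernel has registered the device, then prove it is readable with a single direct 4 KiB read. A reconstruction from the trace, with /tmp/nbdtest standing in for the in-tree test/event/nbdtest path; the sleep between retries is an assumption (the trace only shows first-try hits), and the real helper also retries the read itself up to 20 times:

  waitfornbd() {
      local nbd_name=$1 i
      # Up to 20 attempts for the device to appear, per autotest_common.sh@873.
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      # One direct-I/O read of a single 4096-byte block, then confirm the
      # output file is non-empty, as autotest_common.sh@887-890 do above.
      dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      local size
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ]
  }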
00:05:40.060 02:16:26 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:40.060 02:16:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.060 02:16:27 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:40.060 02:16:27 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:40.060 02:16:27 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.318 Malloc0 00:05:40.318 02:16:27 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.318 Malloc1 00:05:40.577 02:16:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.577 02:16:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.577 02:16:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.577 02:16:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:40.577 02:16:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.577 02:16:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:40.577 02:16:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.577 02:16:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.577 02:16:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.577 02:16:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:40.577 02:16:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.577 02:16:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:40.577 02:16:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:40.577 02:16:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:40.577 02:16:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.577 02:16:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:40.577 /dev/nbd0 00:05:40.577 02:16:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:40.577 02:16:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:40.577 02:16:27 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:40.577 02:16:27 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:40.577 02:16:27 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:40.577 02:16:27 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:40.577 02:16:27 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:40.577 02:16:27 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:40.577 02:16:27 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:40.577 02:16:27 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:40.577 02:16:27 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.577 1+0 records in 00:05:40.577 1+0 records out 
00:05:40.577 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000175021 s, 23.4 MB/s 00:05:40.577 02:16:27 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.577 02:16:27 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:40.577 02:16:27 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.577 02:16:27 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:40.577 02:16:27 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:40.577 02:16:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.577 02:16:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.577 02:16:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:40.836 /dev/nbd1 00:05:40.836 02:16:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:40.836 02:16:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:40.836 02:16:27 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:40.836 02:16:27 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:40.836 02:16:27 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:40.836 02:16:27 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:40.836 02:16:27 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:40.836 02:16:27 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:40.836 02:16:27 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:40.836 02:16:27 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:40.836 02:16:27 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.836 1+0 records in 00:05:40.836 1+0 records out 00:05:40.836 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188361 s, 21.7 MB/s 00:05:40.836 02:16:27 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.836 02:16:27 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:40.836 02:16:27 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.836 02:16:27 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:40.836 02:16:27 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:40.836 02:16:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.836 02:16:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.836 02:16:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.836 02:16:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.836 02:16:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:41.094 { 00:05:41.094 "nbd_device": "/dev/nbd0", 00:05:41.094 "bdev_name": "Malloc0" 00:05:41.094 }, 00:05:41.094 { 00:05:41.094 "nbd_device": "/dev/nbd1", 00:05:41.094 "bdev_name": "Malloc1" 00:05:41.094 } 
00:05:41.094 ]' 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:41.094 { 00:05:41.094 "nbd_device": "/dev/nbd0", 00:05:41.094 "bdev_name": "Malloc0" 00:05:41.094 }, 00:05:41.094 { 00:05:41.094 "nbd_device": "/dev/nbd1", 00:05:41.094 "bdev_name": "Malloc1" 00:05:41.094 } 00:05:41.094 ]' 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:41.094 /dev/nbd1' 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:41.094 /dev/nbd1' 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:41.094 256+0 records in 00:05:41.094 256+0 records out 00:05:41.094 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00465167 s, 225 MB/s 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:41.094 256+0 records in 00:05:41.094 256+0 records out 00:05:41.094 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0170723 s, 61.4 MB/s 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:41.094 256+0 records in 00:05:41.094 256+0 records out 00:05:41.094 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146053 s, 71.8 MB/s 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:41.094 02:16:28 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.094 02:16:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:41.353 02:16:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:41.353 02:16:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:41.353 02:16:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:41.353 02:16:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.353 02:16:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.353 02:16:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:41.353 02:16:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.353 02:16:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.353 02:16:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.353 02:16:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:41.612 02:16:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:41.612 02:16:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:41.612 02:16:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:41.612 02:16:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.612 02:16:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.612 02:16:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:41.612 02:16:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.612 02:16:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.612 02:16:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.613 02:16:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.613 02:16:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.871 02:16:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:41.871 02:16:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.871 02:16:28 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:41.871 02:16:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:41.871 02:16:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:41.871 02:16:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.871 02:16:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:41.871 02:16:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:41.871 02:16:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:41.871 02:16:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:41.871 02:16:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:41.871 02:16:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:41.871 02:16:28 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:42.130 02:16:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:42.710 [2024-11-04 02:16:29.592465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.710 [2024-11-04 02:16:29.668304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.710 [2024-11-04 02:16:29.668410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.710 [2024-11-04 02:16:29.766373] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:42.710 [2024-11-04 02:16:29.766418] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:45.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:45.243 02:16:32 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58406 /var/tmp/spdk-nbd.sock 00:05:45.243 02:16:32 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58406 ']' 00:05:45.243 02:16:32 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.243 02:16:32 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:45.243 02:16:32 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
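After each teardown the harness re-queries the server to confirm no devices stayed exported. The zero-count path above (an empty JSON list from nbd_get_disks, a grep -c that matches nothing, and the bare "true" that swallows its non-zero exit) reduces to this sketch:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  # grep -c exits 1 when it counts zero matches, hence the || true guard
  # that shows up as the lone "true" in the trace.
  count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  [ "$count" -eq 0 ]   # success condition behind nbd_common.sh@105's '[' 0 -ne 0 ']' test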
00:05:45.243 02:16:32 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:45.243 02:16:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.243 02:16:32 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:45.243 02:16:32 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:45.243 02:16:32 event.app_repeat -- event/event.sh@39 -- # killprocess 58406 00:05:45.243 02:16:32 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 58406 ']' 00:05:45.243 02:16:32 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 58406 00:05:45.243 02:16:32 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:05:45.243 02:16:32 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:45.243 02:16:32 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58406 00:05:45.243 killing process with pid 58406 00:05:45.243 02:16:32 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:45.243 02:16:32 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:45.243 02:16:32 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58406' 00:05:45.244 02:16:32 event.app_repeat -- common/autotest_common.sh@971 -- # kill 58406 00:05:45.244 02:16:32 event.app_repeat -- common/autotest_common.sh@976 -- # wait 58406 00:05:45.810 spdk_app_start is called in Round 0. 00:05:45.810 Shutdown signal received, stop current app iteration 00:05:45.810 Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 reinitialization... 00:05:45.810 spdk_app_start is called in Round 1. 00:05:45.810 Shutdown signal received, stop current app iteration 00:05:45.810 Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 reinitialization... 00:05:45.810 spdk_app_start is called in Round 2. 00:05:45.810 Shutdown signal received, stop current app iteration 00:05:45.810 Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 reinitialization... 00:05:45.810 spdk_app_start is called in Round 3. 00:05:45.810 Shutdown signal received, stop current app iteration 00:05:45.810 02:16:32 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:45.810 02:16:32 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:45.810 00:05:45.810 real 0m17.465s 00:05:45.810 user 0m38.220s 00:05:45.810 sys 0m2.060s 00:05:45.810 02:16:32 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:45.810 ************************************ 00:05:45.810 END TEST app_repeat 00:05:45.810 ************************************ 00:05:45.810 02:16:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.810 02:16:32 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:45.810 02:16:32 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:45.810 02:16:32 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:45.810 02:16:32 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:45.810 02:16:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.810 ************************************ 00:05:45.810 START TEST cpu_locks 00:05:45.810 ************************************ 00:05:45.810 02:16:32 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:45.810 * Looking for test storage... 
00:05:45.810 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:45.810 02:16:32 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:45.810 02:16:32 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:05:45.810 02:16:32 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:46.068 02:16:32 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:46.068 02:16:32 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.068 02:16:32 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.068 02:16:32 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.068 02:16:32 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.068 02:16:32 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.068 02:16:32 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.068 02:16:32 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.068 02:16:32 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.068 02:16:32 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.068 02:16:32 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.068 02:16:32 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.068 02:16:32 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:46.068 02:16:32 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:46.068 02:16:32 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.068 02:16:32 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.068 02:16:32 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:46.068 02:16:32 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:46.068 02:16:32 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.068 02:16:32 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:46.069 02:16:32 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.069 02:16:32 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:46.069 02:16:32 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:46.069 02:16:32 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.069 02:16:32 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:46.069 02:16:32 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.069 02:16:32 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.069 02:16:32 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.069 02:16:32 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:46.069 02:16:32 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.069 02:16:32 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:46.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.069 --rc genhtml_branch_coverage=1 00:05:46.069 --rc genhtml_function_coverage=1 00:05:46.069 --rc genhtml_legend=1 00:05:46.069 --rc geninfo_all_blocks=1 00:05:46.069 --rc geninfo_unexecuted_blocks=1 00:05:46.069 00:05:46.069 ' 00:05:46.069 02:16:32 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:46.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.069 --rc genhtml_branch_coverage=1 00:05:46.069 --rc genhtml_function_coverage=1 
00:05:46.069 --rc genhtml_legend=1 00:05:46.069 --rc geninfo_all_blocks=1 00:05:46.069 --rc geninfo_unexecuted_blocks=1 00:05:46.069 00:05:46.069 ' 00:05:46.069 02:16:32 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:46.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.069 --rc genhtml_branch_coverage=1 00:05:46.069 --rc genhtml_function_coverage=1 00:05:46.069 --rc genhtml_legend=1 00:05:46.069 --rc geninfo_all_blocks=1 00:05:46.069 --rc geninfo_unexecuted_blocks=1 00:05:46.069 00:05:46.069 ' 00:05:46.069 02:16:32 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:46.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.069 --rc genhtml_branch_coverage=1 00:05:46.069 --rc genhtml_function_coverage=1 00:05:46.069 --rc genhtml_legend=1 00:05:46.069 --rc geninfo_all_blocks=1 00:05:46.069 --rc geninfo_unexecuted_blocks=1 00:05:46.069 00:05:46.069 ' 00:05:46.069 02:16:32 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:46.069 02:16:32 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:46.069 02:16:32 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:46.069 02:16:32 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:46.069 02:16:32 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:46.069 02:16:32 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:46.069 02:16:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.069 ************************************ 00:05:46.069 START TEST default_locks 00:05:46.069 ************************************ 00:05:46.069 02:16:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:05:46.069 02:16:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58832 00:05:46.069 02:16:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58832 00:05:46.069 02:16:32 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58832 ']' 00:05:46.069 02:16:32 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.069 02:16:32 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:46.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.069 02:16:32 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.069 02:16:32 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:46.069 02:16:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.069 02:16:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.069 [2024-11-04 02:16:33.041025] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
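[annotation] The "lt 1.15 2" call traced above is scripts/common.sh splitting both version strings on dots and comparing them field by field to decide whether the installed lcov needs the extra --rc coverage flags. A condensed sketch of the same idea, assuming plain dotted versions; the helper name version_lt is illustrative, not the SPDK function:

    version_lt() {                      # returns 0 when $1 < $2, compared field by field
        local IFS=.
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earlier field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                        # equal versions are not "less than"
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 \
        && echo "old lcov: pass --rc lcov_branch_coverage=1 and friends"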
00:05:46.069 [2024-11-04 02:16:33.041450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58832 ] 00:05:46.327 [2024-11-04 02:16:33.201906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.327 [2024-11-04 02:16:33.296223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.891 02:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:46.891 02:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:05:46.891 02:16:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58832 00:05:46.891 02:16:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58832 00:05:46.891 02:16:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.149 02:16:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58832 00:05:47.149 02:16:34 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 58832 ']' 00:05:47.149 02:16:34 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 58832 00:05:47.149 02:16:34 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:05:47.149 02:16:34 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:47.149 02:16:34 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58832 00:05:47.149 02:16:34 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:47.149 02:16:34 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:47.149 killing process with pid 58832 00:05:47.149 02:16:34 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58832' 00:05:47.149 02:16:34 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 58832 00:05:47.149 02:16:34 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 58832 00:05:48.522 02:16:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58832 00:05:48.522 02:16:35 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:48.522 02:16:35 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58832 00:05:48.522 02:16:35 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:48.522 02:16:35 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.522 02:16:35 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:48.522 02:16:35 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.522 02:16:35 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58832 00:05:48.522 02:16:35 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58832 ']' 00:05:48.522 02:16:35 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.522 02:16:35 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:48.522 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.522 02:16:35 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.522 02:16:35 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:48.522 02:16:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.522 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58832) - No such process 00:05:48.522 ERROR: process (pid: 58832) is no longer running 00:05:48.522 02:16:35 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:48.522 02:16:35 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:05:48.522 02:16:35 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:48.522 02:16:35 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:48.522 02:16:35 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:48.522 02:16:35 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:48.522 02:16:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:48.522 02:16:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:48.522 02:16:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:48.522 02:16:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:48.522 00:05:48.522 real 0m2.566s 00:05:48.522 user 0m2.485s 00:05:48.522 sys 0m0.422s 00:05:48.522 02:16:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:48.522 02:16:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.522 ************************************ 00:05:48.522 END TEST default_locks 00:05:48.522 ************************************ 00:05:48.522 02:16:35 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:48.522 02:16:35 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:48.522 02:16:35 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:48.522 02:16:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.522 ************************************ 00:05:48.522 START TEST default_locks_via_rpc 00:05:48.522 ************************************ 00:05:48.522 02:16:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:05:48.522 02:16:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58891 00:05:48.522 02:16:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58891 00:05:48.523 02:16:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58891 ']' 00:05:48.523 02:16:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.523 02:16:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:48.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
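[annotation] The default_locks run that just finished reduces to three checks: start a target on core 0, prove the core lock file exists while it runs, and prove the pid (and its lock) is gone after the kill. A minimal sketch using the paths from the trace; the polling loop is a crude stand-in for waitforlisten:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &      # claims core 0 on startup
    pid=$!
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done         # wait for the RPC socket
    lslocks -p "$pid" | grep -q spdk_cpu_lock \
        && echo "core lock held by pid $pid"                      # the locks_exist check
    kill "$pid" && wait "$pid" 2>/dev/null
    kill -0 "$pid" 2>/dev/null || echo "pid gone, core 0 lock released"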
00:05:48.523 02:16:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.523 02:16:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:48.523 02:16:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.523 02:16:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.780 [2024-11-04 02:16:35.650071] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:05:48.780 [2024-11-04 02:16:35.650202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58891 ] 00:05:48.780 [2024-11-04 02:16:35.798221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.780 [2024-11-04 02:16:35.878482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.346 02:16:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:49.346 02:16:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:49.346 02:16:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:49.346 02:16:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.346 02:16:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.346 02:16:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.346 02:16:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:49.346 02:16:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:49.346 02:16:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:49.346 02:16:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:49.346 02:16:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:49.346 02:16:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.346 02:16:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.346 02:16:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.346 02:16:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58891 00:05:49.346 02:16:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58891 00:05:49.346 02:16:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:49.603 02:16:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58891 00:05:49.603 02:16:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 58891 ']' 00:05:49.603 02:16:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 58891 00:05:49.603 02:16:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:05:49.603 02:16:36 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:49.603 02:16:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58891 00:05:49.603 killing process with pid 58891 00:05:49.603 02:16:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:49.604 02:16:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:49.604 02:16:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58891' 00:05:49.604 02:16:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 58891 00:05:49.604 02:16:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 58891 00:05:50.977 00:05:50.977 real 0m2.221s 00:05:50.977 user 0m2.207s 00:05:50.977 sys 0m0.408s 00:05:50.977 02:16:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:50.977 02:16:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.977 ************************************ 00:05:50.977 END TEST default_locks_via_rpc 00:05:50.977 ************************************ 00:05:50.977 02:16:37 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:50.977 02:16:37 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:50.977 02:16:37 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:50.977 02:16:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.977 ************************************ 00:05:50.977 START TEST non_locking_app_on_locked_coremask 00:05:50.977 ************************************ 00:05:50.977 02:16:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:05:50.977 02:16:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58943 00:05:50.977 02:16:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58943 /var/tmp/spdk.sock 00:05:50.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.977 02:16:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58943 ']' 00:05:50.977 02:16:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.977 02:16:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:50.977 02:16:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.977 02:16:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:50.977 02:16:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.977 02:16:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:50.977 [2024-11-04 02:16:37.891540] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
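[annotation] default_locks_via_rpc, which just passed, exercises the same lock files but toggles them at runtime: framework_disable_cpumask_locks drops the flocks, framework_enable_cpumask_locks re-takes them. A sketch of the two calls, assuming scripts/rpc.py from the SPDK tree (the rpc_cmd helper in the trace is effectively a wrapper around it):

    scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    lslocks -p "$pid" | grep -c spdk_cpu_lock     # expect 0: no lock files held
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core 0 lock re-acquired"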
00:05:50.977 [2024-11-04 02:16:37.891628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58943 ] 00:05:50.977 [2024-11-04 02:16:38.041528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.235 [2024-11-04 02:16:38.121612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:51.801 02:16:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:51.801 02:16:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:51.801 02:16:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:51.801 02:16:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58959 00:05:51.801 02:16:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58959 /var/tmp/spdk2.sock 00:05:51.801 02:16:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58959 ']' 00:05:51.801 02:16:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:51.801 02:16:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:51.801 02:16:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:51.801 02:16:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:51.801 02:16:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.801 [2024-11-04 02:16:38.794770] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:05:51.801 [2024-11-04 02:16:38.795079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58959 ] 00:05:52.059 [2024-11-04 02:16:38.960314] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
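[annotation] The second target that just came up ("CPU core locks deactivated") can share core 0 with pid 58943 only because of two flags visible in the trace: --disable-cpumask-locks skips the flock on /var/tmp/spdk_cpu_lock_000, and -r gives it its own RPC socket. Reduced to its essentials:

    spdk_tgt -m 0x1 &                                                 # holds the core 0 lock
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # coexists: no lock, own socket
    lslocks | grep spdk_cpu_lock                                      # only the first pid appears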
00:05:52.059 [2024-11-04 02:16:38.960352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.059 [2024-11-04 02:16:39.120544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.025 02:16:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:53.025 02:16:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:53.025 02:16:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58943 00:05:53.025 02:16:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58943 00:05:53.025 02:16:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.283 02:16:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58943 00:05:53.283 02:16:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58943 ']' 00:05:53.283 02:16:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58943 00:05:53.283 02:16:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:53.283 02:16:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:53.283 02:16:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58943 00:05:53.283 killing process with pid 58943 00:05:53.283 02:16:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:53.283 02:16:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:53.283 02:16:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58943' 00:05:53.283 02:16:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58943 00:05:53.283 02:16:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58943 00:05:55.809 02:16:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58959 00:05:55.809 02:16:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58959 ']' 00:05:55.809 02:16:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58959 00:05:55.809 02:16:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:55.809 02:16:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:55.809 02:16:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58959 00:05:55.809 killing process with pid 58959 00:05:55.809 02:16:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:55.809 02:16:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:55.809 02:16:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58959' 00:05:55.809 02:16:42 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58959 00:05:55.809 02:16:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58959 00:05:57.183 00:05:57.183 real 0m6.049s 00:05:57.183 user 0m6.301s 00:05:57.183 sys 0m0.767s 00:05:57.183 02:16:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:57.183 ************************************ 00:05:57.183 END TEST non_locking_app_on_locked_coremask 00:05:57.183 ************************************ 00:05:57.183 02:16:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.183 02:16:43 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:57.183 02:16:43 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:57.183 02:16:43 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:57.183 02:16:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.183 ************************************ 00:05:57.183 START TEST locking_app_on_unlocked_coremask 00:05:57.183 ************************************ 00:05:57.183 02:16:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:05:57.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.183 02:16:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59050 00:05:57.183 02:16:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59050 /var/tmp/spdk.sock 00:05:57.183 02:16:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59050 ']' 00:05:57.183 02:16:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.183 02:16:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:57.183 02:16:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.183 02:16:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:57.183 02:16:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.183 02:16:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:57.183 [2024-11-04 02:16:44.018084] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:05:57.183 [2024-11-04 02:16:44.018218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59050 ] 00:05:57.183 [2024-11-04 02:16:44.177512] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:57.183 [2024-11-04 02:16:44.177580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.183 [2024-11-04 02:16:44.266942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.749 02:16:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:57.749 02:16:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:57.749 02:16:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:57.749 02:16:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59066 00:05:57.749 02:16:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59066 /var/tmp/spdk2.sock 00:05:57.749 02:16:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59066 ']' 00:05:57.749 02:16:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.749 02:16:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:57.749 02:16:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.749 02:16:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:57.749 02:16:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.040 [2024-11-04 02:16:44.911617] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
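[annotation] This test inverts the previous one: the first target (59050) started with --disable-cpumask-locks, so the plain second instance (59066) is the one that takes the core 0 lock, and its pid is what the locks_exist grep checks further down. Sketched:

    spdk_tgt -m 0x1 --disable-cpumask-locks &     # pid1: writes no lock files
    pid1=$!
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &      # pid2: free to claim core 0
    pid2=$!
    lslocks -p "$pid2" | grep -q spdk_cpu_lock && echo "lock belongs to pid2, not pid1"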
00:05:58.040 [2024-11-04 02:16:44.911907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59066 ] 00:05:58.040 [2024-11-04 02:16:45.073208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.299 [2024-11-04 02:16:45.232792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.334 02:16:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:59.334 02:16:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:59.334 02:16:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59066 00:05:59.334 02:16:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59066 00:05:59.334 02:16:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.334 02:16:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59050 00:05:59.334 02:16:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59050 ']' 00:05:59.334 02:16:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59050 00:05:59.334 02:16:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:59.334 02:16:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:59.334 02:16:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59050 00:05:59.592 killing process with pid 59050 00:05:59.592 02:16:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:59.592 02:16:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:59.592 02:16:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59050' 00:05:59.592 02:16:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59050 00:05:59.592 02:16:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59050 00:06:02.121 02:16:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59066 00:06:02.121 02:16:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59066 ']' 00:06:02.121 02:16:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59066 00:06:02.121 02:16:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:02.121 02:16:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:02.121 02:16:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59066 00:06:02.121 killing process with pid 59066 00:06:02.121 02:16:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:02.121 02:16:48 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:02.121 02:16:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59066' 00:06:02.121 02:16:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59066 00:06:02.121 02:16:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59066 00:06:03.057 00:06:03.057 real 0m6.044s 00:06:03.057 user 0m6.313s 00:06:03.057 sys 0m0.800s 00:06:03.057 02:16:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:03.057 ************************************ 00:06:03.057 END TEST locking_app_on_unlocked_coremask 00:06:03.057 ************************************ 00:06:03.057 02:16:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.057 02:16:50 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:03.057 02:16:50 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:03.057 02:16:50 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:03.057 02:16:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.057 ************************************ 00:06:03.057 START TEST locking_app_on_locked_coremask 00:06:03.057 ************************************ 00:06:03.057 02:16:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:06:03.057 02:16:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59157 00:06:03.057 02:16:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59157 /var/tmp/spdk.sock 00:06:03.057 02:16:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59157 ']' 00:06:03.057 02:16:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:03.057 02:16:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.057 02:16:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:03.057 02:16:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.057 02:16:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:03.057 02:16:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.057 [2024-11-04 02:16:50.098135] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:06:03.057 [2024-11-04 02:16:50.098246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59157 ] 00:06:03.314 [2024-11-04 02:16:50.244058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.314 [2024-11-04 02:16:50.324355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.880 02:16:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:03.880 02:16:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:03.880 02:16:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59173 00:06:03.880 02:16:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59173 /var/tmp/spdk2.sock 00:06:03.880 02:16:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:03.880 02:16:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59173 /var/tmp/spdk2.sock 00:06:03.880 02:16:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:03.880 02:16:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:03.880 02:16:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.880 02:16:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:03.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.880 02:16:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.880 02:16:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59173 /var/tmp/spdk2.sock 00:06:03.880 02:16:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59173 ']' 00:06:03.880 02:16:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.880 02:16:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:03.880 02:16:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.880 02:16:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:03.880 02:16:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.880 [2024-11-04 02:16:50.960376] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
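[annotation] The second default-mode target being started here is expected to die: core 0 is already flocked by 59157, and the NOT wrapper in the trace asserts that waitforlisten never succeeds. A rough equivalent, with timeout standing in for the wrapper (it also guards the sketch if startup unexpectedly succeeded and the process stayed up):

    if ! timeout 10 spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
        echo "refused as expected: core 0 lock held by the first target"
    fi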
00:06:03.880 [2024-11-04 02:16:50.960499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59173 ] 00:06:04.138 [2024-11-04 02:16:51.124508] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59157 has claimed it. 00:06:04.138 [2024-11-04 02:16:51.124561] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:04.705 ERROR: process (pid: 59173) is no longer running 00:06:04.705 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59173) - No such process 00:06:04.705 02:16:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:04.705 02:16:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:06:04.705 02:16:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:04.705 02:16:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:04.705 02:16:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:04.705 02:16:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:04.705 02:16:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59157 00:06:04.705 02:16:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59157 00:06:04.705 02:16:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.705 02:16:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59157 00:06:04.705 02:16:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59157 ']' 00:06:04.705 02:16:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59157 00:06:04.705 02:16:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:04.705 02:16:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:04.705 02:16:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59157 00:06:04.963 killing process with pid 59157 00:06:04.963 02:16:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:04.963 02:16:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:04.963 02:16:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59157' 00:06:04.963 02:16:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59157 00:06:04.963 02:16:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59157 00:06:05.898 00:06:05.898 real 0m2.950s 00:06:05.898 user 0m3.148s 00:06:05.898 sys 0m0.534s 00:06:05.898 02:16:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:05.898 02:16:52 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:05.898 ************************************ 00:06:05.898 END TEST locking_app_on_locked_coremask 00:06:05.898 ************************************ 00:06:06.156 02:16:53 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:06.156 02:16:53 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:06.156 02:16:53 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:06.156 02:16:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.156 ************************************ 00:06:06.156 START TEST locking_overlapped_coremask 00:06:06.156 ************************************ 00:06:06.156 02:16:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:06:06.156 02:16:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59226 00:06:06.156 02:16:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59226 /var/tmp/spdk.sock 00:06:06.156 02:16:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59226 ']' 00:06:06.156 02:16:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.156 02:16:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:06.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.156 02:16:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.156 02:16:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:06.156 02:16:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.156 02:16:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:06.156 [2024-11-04 02:16:53.094362] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:06:06.156 [2024-11-04 02:16:53.094947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59226 ] 00:06:06.156 [2024-11-04 02:16:53.246695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:06.413 [2024-11-04 02:16:53.347344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.413 [2024-11-04 02:16:53.347616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.413 [2024-11-04 02:16:53.347642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.978 02:16:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:06.978 02:16:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:06.978 02:16:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59244 00:06:06.978 02:16:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59244 /var/tmp/spdk2.sock 00:06:06.978 02:16:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:06.978 02:16:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59244 /var/tmp/spdk2.sock 00:06:06.978 02:16:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:06.978 02:16:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:06.978 02:16:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.978 02:16:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:06.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:06.978 02:16:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.978 02:16:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59244 /var/tmp/spdk2.sock 00:06:06.978 02:16:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59244 ']' 00:06:06.978 02:16:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.978 02:16:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:06.978 02:16:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.978 02:16:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:06.978 02:16:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.978 [2024-11-04 02:16:54.004594] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
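[annotation] The two masks chosen here collide by construction, which is the point of the test: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so core 2 is contested. The overlap is just a bitwise AND:

    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. bit 2 / core 2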
00:06:06.978 [2024-11-04 02:16:54.004715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59244 ] 00:06:07.235 [2024-11-04 02:16:54.178727] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59226 has claimed it. 00:06:07.236 [2024-11-04 02:16:54.178785] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:07.800 ERROR: process (pid: 59244) is no longer running 00:06:07.800 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59244) - No such process 00:06:07.800 02:16:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:07.800 02:16:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:06:07.800 02:16:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:07.800 02:16:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:07.800 02:16:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:07.800 02:16:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:07.800 02:16:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:07.800 02:16:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:07.800 02:16:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:07.800 02:16:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:07.800 02:16:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59226 00:06:07.800 02:16:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 59226 ']' 00:06:07.800 02:16:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 59226 00:06:07.800 02:16:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:06:07.800 02:16:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:07.800 02:16:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59226 00:06:07.800 killing process with pid 59226 00:06:07.800 02:16:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:07.800 02:16:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:07.800 02:16:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59226' 00:06:07.800 02:16:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 59226 00:06:07.800 02:16:54 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 59226 00:06:08.733 00:06:08.733 real 0m2.800s 00:06:08.733 user 0m7.623s 00:06:08.733 sys 0m0.412s 00:06:08.733 02:16:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:08.733 02:16:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.733 ************************************ 00:06:08.733 END TEST locking_overlapped_coremask 00:06:08.733 ************************************ 00:06:08.991 02:16:55 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:08.991 02:16:55 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:08.991 02:16:55 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:08.991 02:16:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.991 ************************************ 00:06:08.991 START TEST locking_overlapped_coremask_via_rpc 00:06:08.991 ************************************ 00:06:08.991 02:16:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:06:08.991 02:16:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59297 00:06:08.991 02:16:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59297 /var/tmp/spdk.sock 00:06:08.991 02:16:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59297 ']' 00:06:08.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.991 02:16:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.991 02:16:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:08.991 02:16:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.991 02:16:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:08.991 02:16:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.991 02:16:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:08.991 [2024-11-04 02:16:55.925202] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:06:08.991 [2024-11-04 02:16:55.925293] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59297 ] 00:06:08.991 [2024-11-04 02:16:56.074219] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
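[annotation] The check_remaining_locks step traced a few lines up is a plain glob-against-brace-expansion comparison: after the 0x1c target is refused and the 0x7 target is killed off last, /var/tmp must contain exactly one lock file per core of the 0x7 mask. In isolation:

    locks=(/var/tmp/spdk_cpu_lock_*)                 # what actually exists
    expected=(/var/tmp/spdk_cpu_lock_{000..002})     # cores 0-2 of mask 0x7
    [[ "${locks[*]}" == "${expected[*]}" ]] && echo "only cores 0-2 locked"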
00:06:08.991 [2024-11-04 02:16:56.074256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:09.299 [2024-11-04 02:16:56.156847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.299 [2024-11-04 02:16:56.157042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.299 [2024-11-04 02:16:56.157151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.864 02:16:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:09.864 02:16:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:09.864 02:16:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59315 00:06:09.864 02:16:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59315 /var/tmp/spdk2.sock 00:06:09.864 02:16:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59315 ']' 00:06:09.864 02:16:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:09.864 02:16:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.864 02:16:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:09.864 02:16:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:09.864 02:16:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:09.864 02:16:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.864 [2024-11-04 02:16:56.846790] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:06:09.864 [2024-11-04 02:16:56.847119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59315 ] 00:06:10.121 [2024-11-04 02:16:57.010241] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
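[annotation] Setup for the final test is now complete: both targets printed "CPU core locks deactivated", their masks overlap on core 2, and the race is resolved over RPC instead of at startup. In outline:

    spdk_tgt -m 0x7 --disable-cpumask-locks &                          # cores 0-2, unlocked
    spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # cores 2-4, unlocked
    scripts/rpc.py framework_enable_cpumask_locks                      # first target claims 0-2, incl. core 2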
00:06:10.121 [2024-11-04 02:16:57.010283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.122 [2024-11-04 02:16:57.177337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:10.122 [2024-11-04 02:16:57.177486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.122 [2024-11-04 02:16:57.177514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:11.054 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:11.054 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:11.054 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:11.054 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.054 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.054 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.054 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:11.054 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:11.054 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:11.054 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:11.054 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.054 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:11.054 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.054 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:11.054 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.054 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.054 [2024-11-04 02:16:58.115978] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59297 has claimed it. 00:06:11.054 request: 00:06:11.054 { 00:06:11.054 "method": "framework_enable_cpumask_locks", 00:06:11.054 "req_id": 1 00:06:11.054 } 00:06:11.054 Got JSON-RPC error response 00:06:11.054 response: 00:06:11.054 { 00:06:11.054 "code": -32603, 00:06:11.054 "message": "Failed to claim CPU core: 2" 00:06:11.054 } 00:06:11.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
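Context for the failure above: this test runs two spdk_tgt instances with overlapping coremasks, 0x7 (cores 0-2) and 0x1c (cores 2-4), both launched with --disable-cpumask-locks, then enables the locks at runtime over JSON-RPC. The first framework_enable_cpumask_locks call claims cores 0-2, so the second instance's attempt to claim core 2 is the expected -32603 error captured in the response. A minimal sketch of the same sequence, assuming this run's build and socket paths:

  build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                          # first instance, cores 0-2
  build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # second instance, cores 2-4
  scripts/rpc.py framework_enable_cpumask_locks                                # succeeds, claims cores 0-2
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks         # fails: core 2 already claimed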
00:06:11.054 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:11.054 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:11.054 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:11.054 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:11.054 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:11.054 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59297 /var/tmp/spdk.sock 00:06:11.055 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59297 ']' 00:06:11.055 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.055 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:11.055 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.055 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:11.055 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.313 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:11.313 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:11.313 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59315 /var/tmp/spdk2.sock 00:06:11.313 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59315 ']' 00:06:11.313 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.313 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:11.313 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
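The check_remaining_locks step that follows verifies that the successful claim left exactly one lock file per core in the first instance's mask: the /var/tmp/spdk_cpu_lock_* glob must expand to spdk_cpu_lock_000 through spdk_cpu_lock_002. The comparison amounts to this (a sketch using the same paths as the test):

  locks=(/var/tmp/spdk_cpu_lock_*)                 # whatever lock files actually exist
  expected=(/var/tmp/spdk_cpu_lock_{000..002})     # one file per claimed core 0-2
  [[ ${locks[*]} == "${expected[*]}" ]] && echo 'lock files for cores 0-2 present'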
00:06:11.313 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:11.313 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.572 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:11.572 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:11.572 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:11.572 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:11.572 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:11.572 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:11.572 00:06:11.572 real 0m2.669s 00:06:11.572 user 0m1.059s 00:06:11.572 sys 0m0.118s 00:06:11.572 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:11.572 02:16:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.572 ************************************ 00:06:11.572 END TEST locking_overlapped_coremask_via_rpc 00:06:11.572 ************************************ 00:06:11.572 02:16:58 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:11.572 02:16:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59297 ]] 00:06:11.572 02:16:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59297 00:06:11.572 02:16:58 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59297 ']' 00:06:11.572 02:16:58 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59297 00:06:11.572 02:16:58 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:11.572 02:16:58 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:11.572 02:16:58 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59297 00:06:11.572 02:16:58 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:11.572 killing process with pid 59297 00:06:11.573 02:16:58 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:11.573 02:16:58 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59297' 00:06:11.573 02:16:58 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59297 00:06:11.573 02:16:58 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59297 00:06:12.945 02:16:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59315 ]] 00:06:12.945 02:16:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59315 00:06:12.945 02:16:59 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59315 ']' 00:06:12.945 02:16:59 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59315 00:06:12.945 02:16:59 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:12.945 02:16:59 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:12.945 
02:16:59 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59315 00:06:12.945 killing process with pid 59315 00:06:12.945 02:16:59 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:06:12.945 02:16:59 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:06:12.945 02:16:59 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59315' 00:06:12.945 02:16:59 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59315 00:06:12.945 02:16:59 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59315 00:06:14.322 02:17:01 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:14.322 Process with pid 59297 is not found 00:06:14.322 Process with pid 59315 is not found 00:06:14.322 02:17:01 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:14.322 02:17:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59297 ]] 00:06:14.322 02:17:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59297 00:06:14.322 02:17:01 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59297 ']' 00:06:14.322 02:17:01 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59297 00:06:14.322 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59297) - No such process 00:06:14.322 02:17:01 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59297 is not found' 00:06:14.322 02:17:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59315 ]] 00:06:14.322 02:17:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59315 00:06:14.322 02:17:01 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59315 ']' 00:06:14.322 02:17:01 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59315 00:06:14.322 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59315) - No such process 00:06:14.322 02:17:01 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59315 is not found' 00:06:14.322 02:17:01 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:14.322 ************************************ 00:06:14.322 END TEST cpu_locks 00:06:14.322 ************************************ 00:06:14.322 00:06:14.322 real 0m28.395s 00:06:14.322 user 0m48.812s 00:06:14.322 sys 0m4.232s 00:06:14.322 02:17:01 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:14.322 02:17:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.322 ************************************ 00:06:14.322 END TEST event 00:06:14.322 ************************************ 00:06:14.322 00:06:14.322 real 0m54.073s 00:06:14.323 user 1m39.667s 00:06:14.323 sys 0m7.067s 00:06:14.323 02:17:01 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:14.323 02:17:01 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.323 02:17:01 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:14.323 02:17:01 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:14.323 02:17:01 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:14.323 02:17:01 -- common/autotest_common.sh@10 -- # set +x 00:06:14.323 ************************************ 00:06:14.323 START TEST thread 00:06:14.323 ************************************ 00:06:14.323 02:17:01 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:14.323 * Looking for test storage... 
00:06:14.323 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:14.323 02:17:01 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:14.323 02:17:01 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:14.323 02:17:01 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:06:14.323 02:17:01 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:14.323 02:17:01 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.323 02:17:01 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.323 02:17:01 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.323 02:17:01 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.323 02:17:01 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.323 02:17:01 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.323 02:17:01 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.323 02:17:01 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.323 02:17:01 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.323 02:17:01 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.323 02:17:01 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.323 02:17:01 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:14.323 02:17:01 thread -- scripts/common.sh@345 -- # : 1 00:06:14.323 02:17:01 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.323 02:17:01 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:14.323 02:17:01 thread -- scripts/common.sh@365 -- # decimal 1 00:06:14.323 02:17:01 thread -- scripts/common.sh@353 -- # local d=1 00:06:14.323 02:17:01 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.323 02:17:01 thread -- scripts/common.sh@355 -- # echo 1 00:06:14.323 02:17:01 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.323 02:17:01 thread -- scripts/common.sh@366 -- # decimal 2 00:06:14.323 02:17:01 thread -- scripts/common.sh@353 -- # local d=2 00:06:14.323 02:17:01 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.323 02:17:01 thread -- scripts/common.sh@355 -- # echo 2 00:06:14.323 02:17:01 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.323 02:17:01 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.323 02:17:01 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.323 02:17:01 thread -- scripts/common.sh@368 -- # return 0 00:06:14.323 02:17:01 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.323 02:17:01 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:14.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.323 --rc genhtml_branch_coverage=1 00:06:14.323 --rc genhtml_function_coverage=1 00:06:14.323 --rc genhtml_legend=1 00:06:14.323 --rc geninfo_all_blocks=1 00:06:14.323 --rc geninfo_unexecuted_blocks=1 00:06:14.323 00:06:14.323 ' 00:06:14.323 02:17:01 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:14.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.323 --rc genhtml_branch_coverage=1 00:06:14.323 --rc genhtml_function_coverage=1 00:06:14.323 --rc genhtml_legend=1 00:06:14.323 --rc geninfo_all_blocks=1 00:06:14.323 --rc geninfo_unexecuted_blocks=1 00:06:14.323 00:06:14.323 ' 00:06:14.323 02:17:01 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:14.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:14.323 --rc genhtml_branch_coverage=1 00:06:14.323 --rc genhtml_function_coverage=1 00:06:14.323 --rc genhtml_legend=1 00:06:14.323 --rc geninfo_all_blocks=1 00:06:14.323 --rc geninfo_unexecuted_blocks=1 00:06:14.323 00:06:14.323 ' 00:06:14.323 02:17:01 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:14.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.323 --rc genhtml_branch_coverage=1 00:06:14.323 --rc genhtml_function_coverage=1 00:06:14.323 --rc genhtml_legend=1 00:06:14.323 --rc geninfo_all_blocks=1 00:06:14.323 --rc geninfo_unexecuted_blocks=1 00:06:14.323 00:06:14.323 ' 00:06:14.323 02:17:01 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:14.323 02:17:01 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:14.323 02:17:01 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:14.323 02:17:01 thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.323 ************************************ 00:06:14.323 START TEST thread_poller_perf 00:06:14.323 ************************************ 00:06:14.323 02:17:01 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:14.582 [2024-11-04 02:17:01.455143] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:06:14.582 [2024-11-04 02:17:01.455255] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59464 ] 00:06:14.582 [2024-11-04 02:17:01.614205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.840 [2024-11-04 02:17:01.713516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.840 Running 1000 pollers for 1 seconds with 1 microseconds period. 
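In the summary below, poller_cost is the busy TSC cycle count divided by total_run_count, converted to nanoseconds with the reported tsc_hz. A back-of-the-envelope check with this run's figures (bc truncates to integers):

  echo '2610463428 / 307000' | bc              # ~8503 cycles per poll
  echo '8503 * 1000000000 / 2600000000' | bc   # ~3270 nsec at tsc_hz 2600000000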
00:06:15.774 [2024-11-04T02:17:02.885Z] ====================================== 00:06:15.774 [2024-11-04T02:17:02.885Z] busy:2610463428 (cyc) 00:06:15.774 [2024-11-04T02:17:02.885Z] total_run_count: 307000 00:06:15.774 [2024-11-04T02:17:02.885Z] tsc_hz: 2600000000 (cyc) 00:06:15.774 [2024-11-04T02:17:02.885Z] ====================================== 00:06:15.775 [2024-11-04T02:17:02.886Z] poller_cost: 8503 (cyc), 3270 (nsec) 00:06:15.775 00:06:15.775 real 0m1.455s 00:06:15.775 user 0m1.273s 00:06:15.775 sys 0m0.075s 00:06:15.775 02:17:02 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:15.775 02:17:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:15.775 ************************************ 00:06:15.775 END TEST thread_poller_perf 00:06:15.775 ************************************ 00:06:16.071 02:17:02 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:16.071 02:17:02 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:16.071 02:17:02 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:16.071 02:17:02 thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.071 ************************************ 00:06:16.071 START TEST thread_poller_perf 00:06:16.071 ************************************ 00:06:16.071 02:17:02 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:16.071 [2024-11-04 02:17:02.945626] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:06:16.071 [2024-11-04 02:17:02.945790] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59506 ] 00:06:16.071 [2024-11-04 02:17:03.101185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.329 Running 1000 pollers for 1 seconds with 0 microseconds period. 
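This second pass runs the same poller_perf binary with -l 0 instead of -l 1, so the 1000 pollers are registered without a period and execute on every reactor iteration instead of being dispatched from the timer list; per-poll overhead should therefore come out far lower than the 8503 cycles of the timed run above. The two invocations differ only in that flag:

  test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # timed pollers, 1 usec period
  test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1   # untimed pollers, run every iteration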
00:06:16.329 [2024-11-04 02:17:03.196885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.264 [2024-11-04T02:17:04.375Z] ====================================== 00:06:17.264 [2024-11-04T02:17:04.375Z] busy:2603238970 (cyc) 00:06:17.264 [2024-11-04T02:17:04.375Z] total_run_count: 3941000 00:06:17.264 [2024-11-04T02:17:04.375Z] tsc_hz: 2600000000 (cyc) 00:06:17.264 [2024-11-04T02:17:04.375Z] ====================================== 00:06:17.264 [2024-11-04T02:17:04.375Z] poller_cost: 660 (cyc), 253 (nsec) 00:06:17.264 00:06:17.264 real 0m1.433s 00:06:17.264 user 0m1.262s 00:06:17.264 sys 0m0.063s 00:06:17.264 ************************************ 00:06:17.264 END TEST thread_poller_perf 00:06:17.264 ************************************ 00:06:17.264 02:17:04 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:17.264 02:17:04 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:17.523 02:17:04 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:17.523 ************************************ 00:06:17.523 END TEST thread 00:06:17.523 ************************************ 00:06:17.523 00:06:17.523 real 0m3.108s 00:06:17.523 user 0m2.642s 00:06:17.523 sys 0m0.246s 00:06:17.523 02:17:04 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:17.523 02:17:04 thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.523 02:17:04 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:17.523 02:17:04 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:17.523 02:17:04 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:17.523 02:17:04 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:17.523 02:17:04 -- common/autotest_common.sh@10 -- # set +x 00:06:17.523 ************************************ 00:06:17.523 START TEST app_cmdline 00:06:17.523 ************************************ 00:06:17.523 02:17:04 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:17.523 * Looking for test storage... 
00:06:17.523 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:17.523 02:17:04 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:17.523 02:17:04 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:06:17.523 02:17:04 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:17.523 02:17:04 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:17.523 02:17:04 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.523 02:17:04 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.523 02:17:04 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.523 02:17:04 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.523 02:17:04 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.523 02:17:04 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.523 02:17:04 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.523 02:17:04 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.523 02:17:04 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.523 02:17:04 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.523 02:17:04 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.523 02:17:04 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:17.523 02:17:04 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:17.523 02:17:04 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.523 02:17:04 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:17.523 02:17:04 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:17.523 02:17:04 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:17.523 02:17:04 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.523 02:17:04 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:17.523 02:17:04 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.523 02:17:04 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:17.523 02:17:04 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:17.523 02:17:04 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.523 02:17:04 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:17.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
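For the cmdline test below, spdk_tgt is deliberately started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two RPCs are served; cmdline.sh confirms the allow-list by checking that rpc_get_methods reports exactly those two methods and that an unlisted call fails with JSON-RPC error -32601 (Method not found). The same probe by hand would be (a sketch, default /var/tmp/spdk.sock socket assumed):

  build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  scripts/rpc.py spdk_get_version         # allowed, prints the version object
  scripts/rpc.py env_dpdk_get_mem_stats   # rejected with -32601 Method not found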
00:06:17.523 02:17:04 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.523 02:17:04 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.523 02:17:04 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.523 02:17:04 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:17.523 02:17:04 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.523 02:17:04 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:17.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.523 --rc genhtml_branch_coverage=1 00:06:17.523 --rc genhtml_function_coverage=1 00:06:17.523 --rc genhtml_legend=1 00:06:17.523 --rc geninfo_all_blocks=1 00:06:17.523 --rc geninfo_unexecuted_blocks=1 00:06:17.523 00:06:17.523 ' 00:06:17.523 02:17:04 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:17.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.523 --rc genhtml_branch_coverage=1 00:06:17.523 --rc genhtml_function_coverage=1 00:06:17.523 --rc genhtml_legend=1 00:06:17.523 --rc geninfo_all_blocks=1 00:06:17.523 --rc geninfo_unexecuted_blocks=1 00:06:17.523 00:06:17.523 ' 00:06:17.523 02:17:04 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:17.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.523 --rc genhtml_branch_coverage=1 00:06:17.523 --rc genhtml_function_coverage=1 00:06:17.523 --rc genhtml_legend=1 00:06:17.523 --rc geninfo_all_blocks=1 00:06:17.523 --rc geninfo_unexecuted_blocks=1 00:06:17.523 00:06:17.523 ' 00:06:17.523 02:17:04 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:17.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.523 --rc genhtml_branch_coverage=1 00:06:17.523 --rc genhtml_function_coverage=1 00:06:17.523 --rc genhtml_legend=1 00:06:17.523 --rc geninfo_all_blocks=1 00:06:17.524 --rc geninfo_unexecuted_blocks=1 00:06:17.524 00:06:17.524 ' 00:06:17.524 02:17:04 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:17.524 02:17:04 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59584 00:06:17.524 02:17:04 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59584 00:06:17.524 02:17:04 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 59584 ']' 00:06:17.524 02:17:04 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.524 02:17:04 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:17.524 02:17:04 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.524 02:17:04 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:17.524 02:17:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:17.524 02:17:04 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:17.784 [2024-11-04 02:17:04.644549] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:06:17.784 [2024-11-04 02:17:04.644663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59584 ] 00:06:17.784 [2024-11-04 02:17:04.795361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.042 [2024-11-04 02:17:04.894455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.609 02:17:05 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:18.610 02:17:05 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:06:18.610 02:17:05 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:18.610 { 00:06:18.610 "version": "SPDK v25.01-pre git sha1 fa3ab7384", 00:06:18.610 "fields": { 00:06:18.610 "major": 25, 00:06:18.610 "minor": 1, 00:06:18.610 "patch": 0, 00:06:18.610 "suffix": "-pre", 00:06:18.610 "commit": "fa3ab7384" 00:06:18.610 } 00:06:18.610 } 00:06:18.610 02:17:05 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:18.610 02:17:05 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:18.610 02:17:05 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:18.610 02:17:05 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:18.610 02:17:05 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:18.610 02:17:05 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:18.610 02:17:05 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:18.610 02:17:05 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.610 02:17:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:18.610 02:17:05 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.610 02:17:05 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:18.610 02:17:05 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:18.610 02:17:05 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:18.610 02:17:05 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:18.610 02:17:05 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:18.610 02:17:05 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:18.610 02:17:05 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.610 02:17:05 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:18.610 02:17:05 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.610 02:17:05 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:18.610 02:17:05 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.610 02:17:05 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:18.610 02:17:05 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:18.610 02:17:05 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:18.869 request: 00:06:18.869 { 00:06:18.869 "method": "env_dpdk_get_mem_stats", 00:06:18.869 "req_id": 1 00:06:18.869 } 00:06:18.869 Got JSON-RPC error response 00:06:18.869 response: 00:06:18.869 { 00:06:18.869 "code": -32601, 00:06:18.869 "message": "Method not found" 00:06:18.869 } 00:06:18.869 02:17:05 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:18.869 02:17:05 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:18.869 02:17:05 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:18.869 02:17:05 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:18.869 02:17:05 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59584 00:06:18.869 02:17:05 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 59584 ']' 00:06:18.869 02:17:05 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 59584 00:06:18.869 02:17:05 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:06:18.869 02:17:05 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:18.869 02:17:05 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59584 00:06:18.869 killing process with pid 59584 00:06:18.869 02:17:05 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:18.869 02:17:05 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:18.869 02:17:05 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59584' 00:06:18.869 02:17:05 app_cmdline -- common/autotest_common.sh@971 -- # kill 59584 00:06:18.869 02:17:05 app_cmdline -- common/autotest_common.sh@976 -- # wait 59584 00:06:20.246 00:06:20.246 real 0m2.866s 00:06:20.246 user 0m3.184s 00:06:20.246 sys 0m0.421s 00:06:20.246 02:17:07 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:20.246 ************************************ 00:06:20.246 END TEST app_cmdline 00:06:20.246 ************************************ 00:06:20.246 02:17:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:20.246 02:17:07 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:20.246 02:17:07 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:20.246 02:17:07 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:20.246 02:17:07 -- common/autotest_common.sh@10 -- # set +x 00:06:20.246 ************************************ 00:06:20.246 START TEST version 00:06:20.246 ************************************ 00:06:20.246 02:17:07 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:20.505 * Looking for test storage... 
00:06:20.505 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:20.505 02:17:07 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:20.505 02:17:07 version -- common/autotest_common.sh@1691 -- # lcov --version 00:06:20.505 02:17:07 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:20.505 02:17:07 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:20.505 02:17:07 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.505 02:17:07 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.505 02:17:07 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.505 02:17:07 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.505 02:17:07 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.505 02:17:07 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.505 02:17:07 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.505 02:17:07 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.505 02:17:07 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.505 02:17:07 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.505 02:17:07 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.505 02:17:07 version -- scripts/common.sh@344 -- # case "$op" in 00:06:20.505 02:17:07 version -- scripts/common.sh@345 -- # : 1 00:06:20.505 02:17:07 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.505 02:17:07 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:20.505 02:17:07 version -- scripts/common.sh@365 -- # decimal 1 00:06:20.505 02:17:07 version -- scripts/common.sh@353 -- # local d=1 00:06:20.505 02:17:07 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.505 02:17:07 version -- scripts/common.sh@355 -- # echo 1 00:06:20.505 02:17:07 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.505 02:17:07 version -- scripts/common.sh@366 -- # decimal 2 00:06:20.505 02:17:07 version -- scripts/common.sh@353 -- # local d=2 00:06:20.505 02:17:07 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.505 02:17:07 version -- scripts/common.sh@355 -- # echo 2 00:06:20.505 02:17:07 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.505 02:17:07 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.505 02:17:07 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.505 02:17:07 version -- scripts/common.sh@368 -- # return 0 00:06:20.505 02:17:07 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.505 02:17:07 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:20.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.505 --rc genhtml_branch_coverage=1 00:06:20.505 --rc genhtml_function_coverage=1 00:06:20.505 --rc genhtml_legend=1 00:06:20.505 --rc geninfo_all_blocks=1 00:06:20.505 --rc geninfo_unexecuted_blocks=1 00:06:20.505 00:06:20.505 ' 00:06:20.505 02:17:07 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:20.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.505 --rc genhtml_branch_coverage=1 00:06:20.505 --rc genhtml_function_coverage=1 00:06:20.505 --rc genhtml_legend=1 00:06:20.505 --rc geninfo_all_blocks=1 00:06:20.505 --rc geninfo_unexecuted_blocks=1 00:06:20.505 00:06:20.505 ' 00:06:20.505 02:17:07 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:20.505 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:20.505 --rc genhtml_branch_coverage=1 00:06:20.505 --rc genhtml_function_coverage=1 00:06:20.505 --rc genhtml_legend=1 00:06:20.505 --rc geninfo_all_blocks=1 00:06:20.505 --rc geninfo_unexecuted_blocks=1 00:06:20.505 00:06:20.505 ' 00:06:20.505 02:17:07 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:20.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.505 --rc genhtml_branch_coverage=1 00:06:20.505 --rc genhtml_function_coverage=1 00:06:20.505 --rc genhtml_legend=1 00:06:20.505 --rc geninfo_all_blocks=1 00:06:20.505 --rc geninfo_unexecuted_blocks=1 00:06:20.505 00:06:20.505 ' 00:06:20.506 02:17:07 version -- app/version.sh@17 -- # get_header_version major 00:06:20.506 02:17:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:20.506 02:17:07 version -- app/version.sh@14 -- # tr -d '"' 00:06:20.506 02:17:07 version -- app/version.sh@14 -- # cut -f2 00:06:20.506 02:17:07 version -- app/version.sh@17 -- # major=25 00:06:20.506 02:17:07 version -- app/version.sh@18 -- # get_header_version minor 00:06:20.506 02:17:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:20.506 02:17:07 version -- app/version.sh@14 -- # cut -f2 00:06:20.506 02:17:07 version -- app/version.sh@14 -- # tr -d '"' 00:06:20.506 02:17:07 version -- app/version.sh@18 -- # minor=1 00:06:20.506 02:17:07 version -- app/version.sh@19 -- # get_header_version patch 00:06:20.506 02:17:07 version -- app/version.sh@14 -- # cut -f2 00:06:20.506 02:17:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:20.506 02:17:07 version -- app/version.sh@14 -- # tr -d '"' 00:06:20.506 02:17:07 version -- app/version.sh@19 -- # patch=0 00:06:20.506 02:17:07 version -- app/version.sh@20 -- # get_header_version suffix 00:06:20.506 02:17:07 version -- app/version.sh@14 -- # cut -f2 00:06:20.506 02:17:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:20.506 02:17:07 version -- app/version.sh@14 -- # tr -d '"' 00:06:20.506 02:17:07 version -- app/version.sh@20 -- # suffix=-pre 00:06:20.506 02:17:07 version -- app/version.sh@22 -- # version=25.1 00:06:20.506 02:17:07 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:20.506 02:17:07 version -- app/version.sh@28 -- # version=25.1rc0 00:06:20.506 02:17:07 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:20.506 02:17:07 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:20.506 02:17:07 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:20.506 02:17:07 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:20.506 00:06:20.506 real 0m0.182s 00:06:20.506 user 0m0.111s 00:06:20.506 sys 0m0.098s 00:06:20.506 ************************************ 00:06:20.506 END TEST version 00:06:20.506 ************************************ 00:06:20.506 02:17:07 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:20.506 02:17:07 version -- common/autotest_common.sh@10 -- # set +x 00:06:20.506 02:17:07 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:20.506 02:17:07 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:20.506 02:17:07 -- spdk/autotest.sh@194 -- # uname -s 00:06:20.506 02:17:07 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:20.506 02:17:07 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:20.506 02:17:07 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:20.506 02:17:07 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:20.506 02:17:07 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:20.506 02:17:07 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:20.506 02:17:07 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:20.506 02:17:07 -- common/autotest_common.sh@10 -- # set +x 00:06:20.506 ************************************ 00:06:20.506 START TEST blockdev_nvme 00:06:20.506 ************************************ 00:06:20.506 02:17:07 blockdev_nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:20.506 * Looking for test storage... 00:06:20.809 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:20.809 02:17:07 blockdev_nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:20.809 02:17:07 blockdev_nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:20.810 02:17:07 blockdev_nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:06:20.810 02:17:07 blockdev_nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:20.810 02:17:07 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.810 02:17:07 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.810 02:17:07 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.810 02:17:07 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.810 02:17:07 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.810 02:17:07 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.810 02:17:07 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.810 02:17:07 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.810 02:17:07 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.810 02:17:07 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.810 02:17:07 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.810 02:17:07 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:20.810 02:17:07 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:20.810 02:17:07 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.810 02:17:07 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:20.810 02:17:07 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:20.810 02:17:07 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:20.810 02:17:07 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.810 02:17:07 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:20.810 02:17:07 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.810 02:17:07 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:20.810 02:17:07 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:20.810 02:17:07 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.810 02:17:07 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:20.810 02:17:07 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.810 02:17:07 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.810 02:17:07 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.810 02:17:07 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:20.810 02:17:07 blockdev_nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.810 02:17:07 blockdev_nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:20.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.810 --rc genhtml_branch_coverage=1 00:06:20.810 --rc genhtml_function_coverage=1 00:06:20.810 --rc genhtml_legend=1 00:06:20.810 --rc geninfo_all_blocks=1 00:06:20.810 --rc geninfo_unexecuted_blocks=1 00:06:20.810 00:06:20.810 ' 00:06:20.810 02:17:07 blockdev_nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:20.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.810 --rc genhtml_branch_coverage=1 00:06:20.810 --rc genhtml_function_coverage=1 00:06:20.810 --rc genhtml_legend=1 00:06:20.810 --rc geninfo_all_blocks=1 00:06:20.810 --rc geninfo_unexecuted_blocks=1 00:06:20.810 00:06:20.810 ' 00:06:20.810 02:17:07 blockdev_nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:20.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.810 --rc genhtml_branch_coverage=1 00:06:20.810 --rc genhtml_function_coverage=1 00:06:20.810 --rc genhtml_legend=1 00:06:20.810 --rc geninfo_all_blocks=1 00:06:20.810 --rc geninfo_unexecuted_blocks=1 00:06:20.810 00:06:20.810 ' 00:06:20.810 02:17:07 blockdev_nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:20.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.810 --rc genhtml_branch_coverage=1 00:06:20.810 --rc genhtml_function_coverage=1 00:06:20.810 --rc genhtml_legend=1 00:06:20.810 --rc geninfo_all_blocks=1 00:06:20.810 --rc geninfo_unexecuted_blocks=1 00:06:20.810 00:06:20.810 ' 00:06:20.810 02:17:07 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:20.810 02:17:07 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:20.810 02:17:07 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:20.810 02:17:07 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:20.810 02:17:07 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:20.810 02:17:07 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:20.810 02:17:07 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:20.810 02:17:07 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:20.810 02:17:07 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:20.810 02:17:07 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:20.810 02:17:07 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:20.810 02:17:07 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:20.810 02:17:07 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:20.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.810 02:17:07 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:20.810 02:17:07 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:20.810 02:17:07 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:20.810 02:17:07 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:20.810 02:17:07 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:20.810 02:17:07 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:20.810 02:17:07 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:20.810 02:17:07 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:20.810 02:17:07 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:20.810 02:17:07 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:20.810 02:17:07 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:20.810 02:17:07 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59762 00:06:20.810 02:17:07 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:20.810 02:17:07 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59762 00:06:20.810 02:17:07 blockdev_nvme -- common/autotest_common.sh@833 -- # '[' -z 59762 ']' 00:06:20.810 02:17:07 blockdev_nvme -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.810 02:17:07 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:20.810 02:17:07 blockdev_nvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:20.810 02:17:07 blockdev_nvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.810 02:17:07 blockdev_nvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:20.810 02:17:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:20.810 [2024-11-04 02:17:07.768367] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
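Once the target below is up, blockdev.sh assembles its bdev configuration with scripts/gen_nvme.sh and feeds it to load_subsystem_config: one bdev_nvme_attach_controller entry per PCIe controller, Nvme0 through Nvme3 at 0000:00:10.0 through 0000:00:13.0 (the four QEMU-emulated NVMe devices on this VM). Attaching the first one by hand would look like this (a sketch; names and addresses are this run's):

  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0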
00:06:20.810 [2024-11-04 02:17:07.768480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59762 ] 00:06:21.096 [2024-11-04 02:17:07.927140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.096 [2024-11-04 02:17:08.026969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.662 02:17:08 blockdev_nvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:21.662 02:17:08 blockdev_nvme -- common/autotest_common.sh@866 -- # return 0 00:06:21.662 02:17:08 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:21.662 02:17:08 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:21.662 02:17:08 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:21.662 02:17:08 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:21.662 02:17:08 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:21.662 02:17:08 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:21.662 02:17:08 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.662 02:17:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:21.920 02:17:08 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.920 02:17:08 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:21.920 02:17:08 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.920 02:17:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:21.920 02:17:08 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.920 02:17:08 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:21.920 02:17:08 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:21.920 02:17:08 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.920 02:17:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:21.920 02:17:08 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.920 02:17:08 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:21.920 02:17:08 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.920 02:17:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:21.920 02:17:08 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.920 02:17:08 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:21.920 02:17:08 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.920 02:17:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:21.920 02:17:08 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.920 02:17:08 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:21.920 02:17:08 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:21.920 02:17:08 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.920 02:17:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:21.920 02:17:08 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:21.920 02:17:09 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.920 02:17:09 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:22.179 02:17:09 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:22.179 02:17:09 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "c3d95597-df9b-4a54-8608-8b712b8d34cf"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "c3d95597-df9b-4a54-8608-8b712b8d34cf",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "0134ff32-c442-44c5-ab24-5736b083feaf"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "0134ff32-c442-44c5-ab24-5736b083feaf",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "ae8175a2-54b0-4cc9-bc40-a133ede6f3d1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ae8175a2-54b0-4cc9-bc40-a133ede6f3d1",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "21d79f97-7a62-4f8a-9dad-78db02262f77"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "21d79f97-7a62-4f8a-9dad-78db02262f77",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "df49be96-ac8d-4b6a-b0a2-fbba011edf9a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "df49be96-ac8d-4b6a-b0a2-fbba011edf9a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "774108f1-03ba-46c6-8161-0be1f7bd1525"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "774108f1-03ba-46c6-8161-0be1f7bd1525",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:22.179 02:17:09 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:22.179 02:17:09 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:22.179 02:17:09 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:22.179 02:17:09 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 59762 00:06:22.179 02:17:09 blockdev_nvme -- common/autotest_common.sh@952 -- # '[' -z 59762 ']' 00:06:22.179 02:17:09 blockdev_nvme -- common/autotest_common.sh@956 -- # kill -0 59762 00:06:22.180 02:17:09 blockdev_nvme -- common/autotest_common.sh@957 -- # uname 00:06:22.180 02:17:09 
blockdev_nvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:22.180 02:17:09 blockdev_nvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59762 00:06:22.180 killing process with pid 59762 00:06:22.180 02:17:09 blockdev_nvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:22.180 02:17:09 blockdev_nvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:22.180 02:17:09 blockdev_nvme -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59762' 00:06:22.180 02:17:09 blockdev_nvme -- common/autotest_common.sh@971 -- # kill 59762 00:06:22.180 02:17:09 blockdev_nvme -- common/autotest_common.sh@976 -- # wait 59762 00:06:23.558 02:17:10 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:23.558 02:17:10 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:23.559 02:17:10 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:06:23.559 02:17:10 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:23.559 02:17:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:23.559 ************************************ 00:06:23.559 START TEST bdev_hello_world 00:06:23.559 ************************************ 00:06:23.559 02:17:10 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:23.559 [2024-11-04 02:17:10.633236] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:06:23.559 [2024-11-04 02:17:10.633479] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59840 ] 00:06:23.818 [2024-11-04 02:17:10.789344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.818 [2024-11-04 02:17:10.888666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.387 [2024-11-04 02:17:11.429027] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:24.387 [2024-11-04 02:17:11.429069] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:24.387 [2024-11-04 02:17:11.429088] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:24.387 [2024-11-04 02:17:11.431466] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:24.387 [2024-11-04 02:17:11.432158] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:24.387 [2024-11-04 02:17:11.432185] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:24.387 [2024-11-04 02:17:11.432570] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
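(The hello_bdev notices above — open the bdev, write, read back "Hello World!" — come from a standalone SPDK example binary. A minimal sketch of rerunning it by hand, assuming the same repo layout and the bdev.json already generated for this run; the invocation itself is taken verbatim from the run_test trace above:)

  # Reproduce the hello-world pass against the first controller's namespace:
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1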
00:06:24.387 00:06:24.387 [2024-11-04 02:17:11.432597] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:25.324 ************************************ 00:06:25.324 END TEST bdev_hello_world 00:06:25.324 ************************************ 00:06:25.324 00:06:25.324 real 0m1.540s 00:06:25.324 user 0m1.261s 00:06:25.324 sys 0m0.172s 00:06:25.324 02:17:12 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:25.324 02:17:12 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:25.324 02:17:12 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:25.324 02:17:12 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:25.324 02:17:12 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:25.324 02:17:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:25.324 ************************************ 00:06:25.324 START TEST bdev_bounds 00:06:25.324 ************************************ 00:06:25.324 02:17:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:06:25.324 02:17:12 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59882 00:06:25.324 02:17:12 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:25.324 02:17:12 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59882' 00:06:25.324 Process bdevio pid: 59882 00:06:25.324 02:17:12 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59882 00:06:25.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.324 02:17:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 59882 ']' 00:06:25.324 02:17:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.324 02:17:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:25.324 02:17:12 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:25.324 02:17:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.324 02:17:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:25.324 02:17:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:25.324 [2024-11-04 02:17:12.230656] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
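(bdev_bounds drives bdevio in two steps, as traced below: start the bdevio server with -w so it waits for an RPC trigger, then fire perform_tests from tests.py. A hedged sketch of the same two-step invocation, assuming the paths used in this run:)

  cd /home/vagrant/spdk_repo/spdk
  ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &   # -w: wait for the RPC trigger
  ./test/bdev/bdevio/tests.py perform_tests                        # runs every suite shown below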
00:06:25.324 [2024-11-04 02:17:12.230771] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59882 ] 00:06:25.324 [2024-11-04 02:17:12.390032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:25.583 [2024-11-04 02:17:12.474162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.583 [2024-11-04 02:17:12.474455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.583 [2024-11-04 02:17:12.474475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.148 02:17:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:26.148 02:17:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:06:26.148 02:17:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:26.148 I/O targets: 00:06:26.148 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:26.148 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:26.148 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:26.148 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:26.148 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:26.148 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:26.148 00:06:26.148 00:06:26.148 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.148 http://cunit.sourceforge.net/ 00:06:26.148 00:06:26.148 00:06:26.148 Suite: bdevio tests on: Nvme3n1 00:06:26.148 Test: blockdev write read block ...passed 00:06:26.148 Test: blockdev write zeroes read block ...passed 00:06:26.148 Test: blockdev write zeroes read no split ...passed 00:06:26.148 Test: blockdev write zeroes read split ...passed 00:06:26.148 Test: blockdev write zeroes read split partial ...passed 00:06:26.148 Test: blockdev reset ...[2024-11-04 02:17:13.202465] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:26.148 passed 00:06:26.148 Test: blockdev write read 8 blocks ...[2024-11-04 02:17:13.205483] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:26.148 passed 00:06:26.148 Test: blockdev write read size > 128k ...passed 00:06:26.148 Test: blockdev write read invalid size ...passed 00:06:26.148 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:26.148 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:26.148 Test: blockdev write read max offset ...passed 00:06:26.148 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:26.148 Test: blockdev writev readv 8 blocks ...passed 00:06:26.148 Test: blockdev writev readv 30 x 1block ...passed 00:06:26.148 Test: blockdev writev readv block ...passed 00:06:26.148 Test: blockdev writev readv size > 128k ...passed 00:06:26.148 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:26.148 Test: blockdev comparev and writev ...[2024-11-04 02:17:13.211659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b880a000 len:0x1000 00:06:26.148 [2024-11-04 02:17:13.211702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:26.148 passed 00:06:26.148 Test: blockdev nvme passthru rw ...passed 00:06:26.148 Test: blockdev nvme passthru vendor specific ...[2024-11-04 02:17:13.212260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:26.148 [2024-11-04 02:17:13.212373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:26.148 passed 00:06:26.148 Test: blockdev nvme admin passthru ...passed 00:06:26.148 Test: blockdev copy ...passed 00:06:26.148 Suite: bdevio tests on: Nvme2n3 00:06:26.148 Test: blockdev write read block ...passed 00:06:26.148 Test: blockdev write zeroes read block ...passed 00:06:26.148 Test: blockdev write zeroes read no split ...passed 00:06:26.148 Test: blockdev write zeroes read split ...passed 00:06:26.148 Test: blockdev write zeroes read split partial ...passed 00:06:26.148 Test: blockdev reset ...[2024-11-04 02:17:13.253564] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:26.149 [2024-11-04 02:17:13.256601] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:06:26.149 passed 00:06:26.149 Test: blockdev write read 8 blocks ...passed 00:06:26.408 Test: blockdev write read size > 128k ...passed 00:06:26.408 Test: blockdev write read invalid size ...passed 00:06:26.408 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:26.408 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:26.408 Test: blockdev write read max offset ...passed 00:06:26.408 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:26.408 Test: blockdev writev readv 8 blocks ...passed 00:06:26.408 Test: blockdev writev readv 30 x 1block ...passed 00:06:26.408 Test: blockdev writev readv block ...passed 00:06:26.408 Test: blockdev writev readv size > 128k ...passed 00:06:26.408 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:26.408 Test: blockdev comparev and writev ...[2024-11-04 02:17:13.263809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29c206000 len:0x1000 00:06:26.408 [2024-11-04 02:17:13.263846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:26.408 passed 00:06:26.408 Test: blockdev nvme passthru rw ...passed 00:06:26.408 Test: blockdev nvme passthru vendor specific ...passed 00:06:26.408 Test: blockdev nvme admin passthru ...[2024-11-04 02:17:13.264513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:26.408 [2024-11-04 02:17:13.264541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:26.408 passed 00:06:26.408 Test: blockdev copy ...passed 00:06:26.408 Suite: bdevio tests on: Nvme2n2 00:06:26.408 Test: blockdev write read block ...passed 00:06:26.408 Test: blockdev write zeroes read block ...passed 00:06:26.408 Test: blockdev write zeroes read no split ...passed 00:06:26.408 Test: blockdev write zeroes read split ...passed 00:06:26.408 Test: blockdev write zeroes read split partial ...passed 00:06:26.408 Test: blockdev reset ...[2024-11-04 02:17:13.322839] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:26.408 [2024-11-04 02:17:13.325715] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:06:26.408 passed 00:06:26.408 Test: blockdev write read 8 blocks ...
00:06:26.408 passed 00:06:26.408 Test: blockdev write read size > 128k ...passed 00:06:26.408 Test: blockdev write read invalid size ...passed 00:06:26.408 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:26.408 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:26.408 Test: blockdev write read max offset ...passed 00:06:26.408 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:26.408 Test: blockdev writev readv 8 blocks ...passed 00:06:26.408 Test: blockdev writev readv 30 x 1block ...passed 00:06:26.408 Test: blockdev writev readv block ...passed 00:06:26.408 Test: blockdev writev readv size > 128k ...passed 00:06:26.408 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:26.408 Test: blockdev comparev and writev ...[2024-11-04 02:17:13.332902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cd23c000 len:0x1000 00:06:26.408 [2024-11-04 02:17:13.333115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:26.408 passed 00:06:26.408 Test: blockdev nvme passthru rw ...passed 00:06:26.408 Test: blockdev nvme passthru vendor specific ...[2024-11-04 02:17:13.333946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:26.408 [2024-11-04 02:17:13.334109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:26.408 passed 00:06:26.408 Test: blockdev nvme admin passthru ...passed 00:06:26.408 Test: blockdev copy ...passed 00:06:26.408 Suite: bdevio tests on: Nvme2n1 00:06:26.408 Test: blockdev write read block ...passed 00:06:26.408 Test: blockdev write zeroes read block ...passed 00:06:26.408 Test: blockdev write zeroes read no split ...passed 00:06:26.408 Test: blockdev write zeroes read split ...passed 00:06:26.408 Test: blockdev write zeroes read split partial ...passed 00:06:26.408 Test: blockdev reset ...[2024-11-04 02:17:13.390148] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:26.408 [2024-11-04 02:17:13.393026] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:26.408 passed 00:06:26.408 Test: blockdev write read 8 blocks ...passed 00:06:26.408 Test: blockdev write read size > 128k ...passed 00:06:26.408 Test: blockdev write read invalid size ...passed 00:06:26.408 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:26.408 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:26.408 Test: blockdev write read max offset ...passed 00:06:26.408 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:26.408 Test: blockdev writev readv 8 blocks ...passed 00:06:26.408 Test: blockdev writev readv 30 x 1block ...passed 00:06:26.408 Test: blockdev writev readv block ...passed 00:06:26.408 Test: blockdev writev readv size > 128k ...passed 00:06:26.408 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:26.408 Test: blockdev comparev and writev ...[2024-11-04 02:17:13.399614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cd238000 len:0x1000 00:06:26.408 [2024-11-04 02:17:13.399654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:26.408 passed 00:06:26.408 Test: blockdev nvme passthru rw ...passed 00:06:26.408 Test: blockdev nvme passthru vendor specific ...passed 00:06:26.408 Test: blockdev nvme admin passthru ...[2024-11-04 02:17:13.400228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:26.408 [2024-11-04 02:17:13.400253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:26.408 passed 00:06:26.408 Test: blockdev copy ...passed 00:06:26.408 Suite: bdevio tests on: Nvme1n1 00:06:26.408 Test: blockdev write read block ...passed 00:06:26.408 Test: blockdev write zeroes read block ...passed 00:06:26.408 Test: blockdev write zeroes read no split ...passed 00:06:26.408 Test: blockdev write zeroes read split ...passed 00:06:26.408 Test: blockdev write zeroes read split partial ...passed 00:06:26.408 Test: blockdev reset ...[2024-11-04 02:17:13.442320] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:26.408 [2024-11-04 02:17:13.444954] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:26.408 passed 00:06:26.408 Test: blockdev write read 8 blocks ...passed 00:06:26.408 Test: blockdev write read size > 128k ...passed 00:06:26.408 Test: blockdev write read invalid size ...passed 00:06:26.408 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:26.408 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:26.408 Test: blockdev write read max offset ...passed 00:06:26.408 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:26.408 Test: blockdev writev readv 8 blocks ...passed 00:06:26.408 Test: blockdev writev readv 30 x 1block ...passed 00:06:26.408 Test: blockdev writev readv block ...passed 00:06:26.408 Test: blockdev writev readv size > 128k ...passed 00:06:26.408 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:26.408 Test: blockdev comparev and writev ...[2024-11-04 02:17:13.451579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cd234000 len:0x1000 00:06:26.408 [2024-11-04 02:17:13.451765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:26.408 passed 00:06:26.408 Test: blockdev nvme passthru rw ...passed 00:06:26.408 Test: blockdev nvme passthru vendor specific ...[2024-11-04 02:17:13.452507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:26.408 [2024-11-04 02:17:13.452636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:26.408 passed 00:06:26.408 Test: blockdev nvme admin passthru ...passed 00:06:26.408 Test: blockdev copy ...passed 00:06:26.408 Suite: bdevio tests on: Nvme0n1 00:06:26.408 Test: blockdev write read block ...passed 00:06:26.408 Test: blockdev write zeroes read block ...passed 00:06:26.408 Test: blockdev write zeroes read no split ...passed 00:06:26.408 Test: blockdev write zeroes read split ...passed 00:06:26.408 Test: blockdev write zeroes read split partial ...passed 00:06:26.408 Test: blockdev reset ...[2024-11-04 02:17:13.507647] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:26.408 passed 00:06:26.408 Test: blockdev write read 8 blocks ...[2024-11-04 02:17:13.510036] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:26.408 passed 00:06:26.408 Test: blockdev write read size > 128k ...passed 00:06:26.408 Test: blockdev write read invalid size ...passed 00:06:26.408 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:26.408 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:26.408 Test: blockdev write read max offset ...passed 00:06:26.408 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:26.408 Test: blockdev writev readv 8 blocks ...passed 00:06:26.408 Test: blockdev writev readv 30 x 1block ...passed 00:06:26.408 Test: blockdev writev readv block ...passed 00:06:26.408 Test: blockdev writev readv size > 128k ...passed 00:06:26.408 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:26.408 Test: blockdev comparev and writev ...[2024-11-04 02:17:13.516006] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:26.408 separate metadata which is not supported yet. 
00:06:26.408 passed 00:06:26.408 Test: blockdev nvme passthru rw ...passed 00:06:26.408 Test: blockdev nvme passthru vendor specific ...passed 00:06:26.408 Test: blockdev nvme admin passthru ...[2024-11-04 02:17:13.516558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:26.408 [2024-11-04 02:17:13.516597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:26.696 passed 00:06:26.696 Test: blockdev copy ...passed 00:06:26.696 00:06:26.696 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.696 suites 6 6 n/a 0 0 00:06:26.696 tests 138 138 138 0 0 00:06:26.696 asserts 893 893 893 0 n/a 00:06:26.696 00:06:26.696 Elapsed time = 0.951 seconds 00:06:26.696 0 00:06:26.696 02:17:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59882 00:06:26.696 02:17:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 59882 ']' 00:06:26.696 02:17:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 59882 00:06:26.696 02:17:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:06:26.696 02:17:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:26.696 02:17:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59882 00:06:26.696 02:17:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:26.696 02:17:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:26.696 02:17:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59882' 00:06:26.696 killing process with pid 59882 00:06:26.696 02:17:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 59882 00:06:26.696 02:17:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 59882 00:06:27.265 02:17:14 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:27.265 00:06:27.265 real 0m1.916s 00:06:27.265 user 0m4.924s 00:06:27.265 sys 0m0.257s 00:06:27.265 02:17:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:27.265 02:17:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:27.265 ************************************ 00:06:27.265 END TEST bdev_bounds 00:06:27.265 ************************************ 00:06:27.265 02:17:14 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:27.265 02:17:14 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:06:27.265 02:17:14 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:27.265 02:17:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:27.265 ************************************ 00:06:27.265 START TEST bdev_nbd 00:06:27.265 ************************************ 00:06:27.265 02:17:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:27.265 02:17:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:27.265 02:17:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:27.265 02:17:14 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.265 02:17:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:27.265 02:17:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:27.265 02:17:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:27.265 02:17:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:27.265 02:17:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:27.265 02:17:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:27.265 02:17:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:27.265 02:17:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:27.265 02:17:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:27.265 02:17:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:27.265 02:17:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:27.265 02:17:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:27.265 02:17:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=59931 00:06:27.265 02:17:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:27.265 02:17:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 59931 /var/tmp/spdk-nbd.sock 00:06:27.265 02:17:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 59931 ']' 00:06:27.265 02:17:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:27.265 02:17:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:27.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:27.265 02:17:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:27.265 02:17:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:27.265 02:17:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:27.265 02:17:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:27.265 [2024-11-04 02:17:14.214038] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
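(The nbd test that follows exports each bdev as a kernel /dev/nbdX device over the dedicated /var/tmp/spdk-nbd.sock RPC socket, then polls until the device is usable. A condensed sketch of the per-device pattern; the grep/dd readiness check mirrors the waitfornbd traces below, while the explicit /dev/nbd0 argument and the sleep backoff are assumptions for illustration:)

  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
  for ((i = 1; i <= 20; i++)); do
    grep -q -w nbd0 /proc/partitions && break   # wait for the kernel to register the disk
    sleep 0.1                                   # assumed backoff; the harness simply re-checks
  done
  dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct   # one direct 4 KiB read proves the device is live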
00:06:27.265 [2024-11-04 02:17:14.214161] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:27.526 [2024-11-04 02:17:14.374193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.526 [2024-11-04 02:17:14.508109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.097 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:28.097 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:06:28.097 02:17:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:28.097 02:17:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.097 02:17:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:28.097 02:17:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:28.097 02:17:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:28.097 02:17:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.097 02:17:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:28.097 02:17:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:28.097 02:17:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:28.097 02:17:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:28.097 02:17:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:28.097 02:17:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:28.097 02:17:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:28.355 02:17:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:28.355 02:17:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:28.355 02:17:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:28.355 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:28.355 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:28.355 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:28.355 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:28.355 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:28.355 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:28.355 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:28.355 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:28.355 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:28.355 1+0 records in 
00:06:28.355 1+0 records out 00:06:28.355 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101567 s, 4.0 MB/s 00:06:28.355 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.355 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:28.355 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.355 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:28.355 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:28.355 02:17:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:28.355 02:17:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:28.355 02:17:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:28.613 02:17:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:28.613 02:17:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:28.613 02:17:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:28.613 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:28.613 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:28.613 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:28.613 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:28.613 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:28.613 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:28.613 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:28.613 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:28.613 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:28.613 1+0 records in 00:06:28.613 1+0 records out 00:06:28.613 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468168 s, 8.7 MB/s 00:06:28.613 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.613 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:28.613 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.613 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:28.613 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:28.613 02:17:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:28.613 02:17:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:28.613 02:17:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:28.871 02:17:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:28.871 02:17:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:28.871 02:17:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:06:28.871 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:06:28.871 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:28.871 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:28.871 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:28.871 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:06:28.871 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:28.871 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:28.871 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:28.871 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:28.871 1+0 records in 00:06:28.871 1+0 records out 00:06:28.871 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000464067 s, 8.8 MB/s 00:06:28.871 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.871 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:28.871 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.871 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:28.871 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:28.871 02:17:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:28.871 02:17:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:28.871 02:17:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:29.132 02:17:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:29.132 02:17:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:29.132 02:17:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:29.132 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:06:29.132 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:29.132 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:29.132 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:29.132 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:06:29.132 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:29.132 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:29.132 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:29.132 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:29.132 1+0 records in 00:06:29.132 1+0 records out 00:06:29.132 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00090279 s, 4.5 MB/s 00:06:29.132 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.132 02:17:15 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:29.132 02:17:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.132 02:17:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:29.132 02:17:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:29.132 02:17:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:29.132 02:17:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:29.132 02:17:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:29.132 02:17:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:29.132 02:17:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:29.132 02:17:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:29.132 02:17:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:06:29.132 02:17:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:29.132 02:17:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:29.132 02:17:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:29.132 02:17:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:06:29.132 02:17:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:29.132 02:17:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:29.132 02:17:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:29.132 02:17:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:29.132 1+0 records in 00:06:29.132 1+0 records out 00:06:29.132 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002804 s, 14.6 MB/s 00:06:29.132 02:17:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.132 02:17:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:29.132 02:17:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.132 02:17:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:29.132 02:17:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:29.132 02:17:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:29.132 02:17:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:29.132 02:17:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:29.392 02:17:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:29.392 02:17:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:29.392 02:17:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:29.392 02:17:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:06:29.392 02:17:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:29.392 02:17:16 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:29.392 02:17:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:29.392 02:17:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:06:29.392 02:17:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:29.392 02:17:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:29.392 02:17:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:29.392 02:17:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:29.392 1+0 records in 00:06:29.392 1+0 records out 00:06:29.392 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00063488 s, 6.5 MB/s 00:06:29.392 02:17:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.392 02:17:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:29.392 02:17:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.392 02:17:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:29.392 02:17:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:29.392 02:17:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:29.392 02:17:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:29.392 02:17:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.650 02:17:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:29.650 { 00:06:29.650 "nbd_device": "/dev/nbd0", 00:06:29.650 "bdev_name": "Nvme0n1" 00:06:29.650 }, 00:06:29.650 { 00:06:29.650 "nbd_device": "/dev/nbd1", 00:06:29.650 "bdev_name": "Nvme1n1" 00:06:29.650 }, 00:06:29.650 { 00:06:29.650 "nbd_device": "/dev/nbd2", 00:06:29.650 "bdev_name": "Nvme2n1" 00:06:29.650 }, 00:06:29.650 { 00:06:29.650 "nbd_device": "/dev/nbd3", 00:06:29.650 "bdev_name": "Nvme2n2" 00:06:29.650 }, 00:06:29.650 { 00:06:29.650 "nbd_device": "/dev/nbd4", 00:06:29.650 "bdev_name": "Nvme2n3" 00:06:29.650 }, 00:06:29.650 { 00:06:29.650 "nbd_device": "/dev/nbd5", 00:06:29.650 "bdev_name": "Nvme3n1" 00:06:29.650 } 00:06:29.650 ]' 00:06:29.650 02:17:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:29.650 02:17:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:29.650 { 00:06:29.650 "nbd_device": "/dev/nbd0", 00:06:29.650 "bdev_name": "Nvme0n1" 00:06:29.650 }, 00:06:29.650 { 00:06:29.650 "nbd_device": "/dev/nbd1", 00:06:29.650 "bdev_name": "Nvme1n1" 00:06:29.650 }, 00:06:29.650 { 00:06:29.650 "nbd_device": "/dev/nbd2", 00:06:29.650 "bdev_name": "Nvme2n1" 00:06:29.650 }, 00:06:29.650 { 00:06:29.650 "nbd_device": "/dev/nbd3", 00:06:29.650 "bdev_name": "Nvme2n2" 00:06:29.650 }, 00:06:29.650 { 00:06:29.650 "nbd_device": "/dev/nbd4", 00:06:29.650 "bdev_name": "Nvme2n3" 00:06:29.650 }, 00:06:29.650 { 00:06:29.650 "nbd_device": "/dev/nbd5", 00:06:29.650 "bdev_name": "Nvme3n1" 00:06:29.650 } 00:06:29.650 ]' 00:06:29.650 02:17:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:29.650 02:17:16 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:29.650 02:17:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.650 02:17:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:29.650 02:17:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:29.650 02:17:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:29.650 02:17:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.650 02:17:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:29.911 02:17:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:29.911 02:17:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:29.911 02:17:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:29.911 02:17:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.911 02:17:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.911 02:17:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:29.911 02:17:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:29.911 02:17:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.911 02:17:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.911 02:17:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:30.171 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:30.171 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:30.171 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:30.171 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.171 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.171 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:30.171 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:30.171 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.171 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.171 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:30.171 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:30.171 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:30.171 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:30.171 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.171 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.171 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:30.171 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:30.171 02:17:17 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:30.171 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.171 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:30.432 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:30.432 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:30.432 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:30.432 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.432 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.432 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:30.432 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:30.432 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.432 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.432 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:30.692 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:30.692 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:30.692 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:30.692 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.692 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.692 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:30.692 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:30.692 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.692 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.692 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:30.952 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:30.952 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:30.952 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:30.952 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.952 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.952 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:30.952 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:30.952 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.952 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.952 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.952 02:17:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:31.213 02:17:18 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:31.213 /dev/nbd0 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:31.213 
02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:31.213 1+0 records in 00:06:31.213 1+0 records out 00:06:31.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000593683 s, 6.9 MB/s 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:31.213 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:31.214 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:31.214 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.214 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:31.214 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:31.474 /dev/nbd1 00:06:31.474 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:31.474 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:31.474 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:31.474 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:31.474 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:31.474 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:31.474 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:31.474 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:31.474 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:31.474 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:31.474 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:31.474 1+0 records in 00:06:31.474 1+0 records out 00:06:31.474 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385366 s, 10.6 MB/s 00:06:31.474 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:31.474 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:31.474 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:31.474 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:31.474 02:17:18 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@891 -- # return 0 00:06:31.474 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.474 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:31.474 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:31.735 /dev/nbd10 00:06:31.735 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:31.735 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:31.735 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:06:31.735 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:31.735 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:31.735 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:31.735 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:06:31.735 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:31.735 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:31.736 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:31.736 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:31.736 1+0 records in 00:06:31.736 1+0 records out 00:06:31.736 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000982692 s, 4.2 MB/s 00:06:31.736 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:31.736 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:31.736 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:31.736 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:31.736 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:31.736 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.736 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:31.736 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:31.994 /dev/nbd11 00:06:31.994 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:31.994 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:31.994 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:06:31.994 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:31.994 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:31.994 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:31.994 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:06:31.994 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:31.994 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:31.994 02:17:18 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:31.994 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:31.994 1+0 records in 00:06:31.994 1+0 records out 00:06:31.994 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384408 s, 10.7 MB/s 00:06:31.994 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:31.994 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:31.994 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:31.994 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:31.994 02:17:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:31.994 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.994 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:31.994 02:17:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:32.252 /dev/nbd12 00:06:32.252 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:32.252 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:32.252 02:17:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:06:32.252 02:17:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:32.252 02:17:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:32.252 02:17:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:32.252 02:17:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:06:32.252 02:17:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:32.252 02:17:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:32.252 02:17:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:32.252 02:17:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:32.252 1+0 records in 00:06:32.252 1+0 records out 00:06:32.252 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494209 s, 8.3 MB/s 00:06:32.252 02:17:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.252 02:17:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:32.252 02:17:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.252 02:17:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:32.252 02:17:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:32.252 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.252 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:32.252 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:32.518 /dev/nbd13 
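The nbd_start_disk/waitfornbd sequence traced above repeats one pattern per device: start the disk over the RPC socket, poll /proc/partitions until the kernel registers the nbd node, then read a single 4 KiB block with O_DIRECT to prove the device actually services I/O; the same check runs for /dev/nbd13 just below. A minimal standalone sketch of that readiness check (the 0.1 s retry delay and the temp-file path are assumptions for illustration, not values taken from this log):

waitfornbd() {
    # Poll until the nbd device appears in /proc/partitions (up to 20 tries),
    # then read one 4 KiB block through O_DIRECT to confirm it answers I/O.
    local nbd_name=$1 tmp=/tmp/nbdtest i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1  # retry delay is an assumption, not from the trace
    done
    ((i <= 20)) || return 1                  # device never showed up
    dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
    [[ $(stat -c %s "$tmp") -eq 4096 ]] || return 1   # got a full block back
    rm -f "$tmp"
}
waitfornbd nbd13   # e.g. after nbd_start_disk Nvme3n1 /dev/nbd13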
00:06:32.518 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:32.518 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:32.518 02:17:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:06:32.518 02:17:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:32.518 02:17:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:32.518 02:17:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:32.518 02:17:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:06:32.518 02:17:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:32.518 02:17:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:32.518 02:17:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:32.518 02:17:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:32.518 1+0 records in 00:06:32.518 1+0 records out 00:06:32.518 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391084 s, 10.5 MB/s 00:06:32.518 02:17:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.518 02:17:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:32.518 02:17:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.518 02:17:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:32.518 02:17:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:32.518 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.518 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:32.518 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.518 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.518 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.777 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:32.777 { 00:06:32.777 "nbd_device": "/dev/nbd0", 00:06:32.777 "bdev_name": "Nvme0n1" 00:06:32.777 }, 00:06:32.777 { 00:06:32.777 "nbd_device": "/dev/nbd1", 00:06:32.777 "bdev_name": "Nvme1n1" 00:06:32.777 }, 00:06:32.777 { 00:06:32.777 "nbd_device": "/dev/nbd10", 00:06:32.777 "bdev_name": "Nvme2n1" 00:06:32.777 }, 00:06:32.777 { 00:06:32.777 "nbd_device": "/dev/nbd11", 00:06:32.777 "bdev_name": "Nvme2n2" 00:06:32.777 }, 00:06:32.777 { 00:06:32.777 "nbd_device": "/dev/nbd12", 00:06:32.777 "bdev_name": "Nvme2n3" 00:06:32.777 }, 00:06:32.777 { 00:06:32.777 "nbd_device": "/dev/nbd13", 00:06:32.777 "bdev_name": "Nvme3n1" 00:06:32.777 } 00:06:32.777 ]' 00:06:32.777 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:32.777 { 00:06:32.777 "nbd_device": "/dev/nbd0", 00:06:32.777 "bdev_name": "Nvme0n1" 00:06:32.777 }, 00:06:32.777 { 00:06:32.777 "nbd_device": "/dev/nbd1", 00:06:32.777 "bdev_name": "Nvme1n1" 00:06:32.777 }, 00:06:32.777 { 00:06:32.777 "nbd_device": "/dev/nbd10", 00:06:32.777 "bdev_name": "Nvme2n1" 
00:06:32.777 }, 00:06:32.777 { 00:06:32.777 "nbd_device": "/dev/nbd11", 00:06:32.777 "bdev_name": "Nvme2n2" 00:06:32.777 }, 00:06:32.777 { 00:06:32.777 "nbd_device": "/dev/nbd12", 00:06:32.777 "bdev_name": "Nvme2n3" 00:06:32.777 }, 00:06:32.777 { 00:06:32.777 "nbd_device": "/dev/nbd13", 00:06:32.777 "bdev_name": "Nvme3n1" 00:06:32.777 } 00:06:32.777 ]' 00:06:32.777 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.777 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:32.777 /dev/nbd1 00:06:32.777 /dev/nbd10 00:06:32.777 /dev/nbd11 00:06:32.777 /dev/nbd12 00:06:32.777 /dev/nbd13' 00:06:32.777 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:32.777 /dev/nbd1 00:06:32.777 /dev/nbd10 00:06:32.777 /dev/nbd11 00:06:32.777 /dev/nbd12 00:06:32.777 /dev/nbd13' 00:06:32.777 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.777 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:32.777 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:32.777 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:32.777 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:32.777 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:32.777 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:32.777 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.777 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:32.777 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:32.777 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:32.777 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:32.777 256+0 records in 00:06:32.777 256+0 records out 00:06:32.777 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00974662 s, 108 MB/s 00:06:32.777 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.777 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:32.777 256+0 records in 00:06:32.777 256+0 records out 00:06:32.777 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0647682 s, 16.2 MB/s 00:06:32.777 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.778 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:32.778 256+0 records in 00:06:32.778 256+0 records out 00:06:32.778 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0652921 s, 16.1 MB/s 00:06:32.778 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.778 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:33.035 256+0 records in 00:06:33.035 256+0 records out 
00:06:33.035 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0654052 s, 16.0 MB/s 00:06:33.035 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.035 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:33.035 256+0 records in 00:06:33.035 256+0 records out 00:06:33.035 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0626615 s, 16.7 MB/s 00:06:33.035 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.035 02:17:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:33.035 256+0 records in 00:06:33.035 256+0 records out 00:06:33.036 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0638966 s, 16.4 MB/s 00:06:33.036 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.036 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:33.036 256+0 records in 00:06:33.036 256+0 records out 00:06:33.036 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0642439 s, 16.3 MB/s 00:06:33.036 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:33.036 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:33.036 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.036 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:33.036 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:33.036 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:33.036 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:33.036 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.036 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:33.036 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.036 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:33.036 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.036 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:33.294 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.294 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:33.294 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.294 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:33.294 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.294 02:17:20 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:33.294 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:33.294 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:33.294 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.294 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:33.294 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:33.294 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:33.294 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.294 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:33.294 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:33.294 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:33.294 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:33.294 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.294 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.294 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:33.294 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:33.294 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.294 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.294 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:33.551 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:33.551 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:33.551 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:33.551 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.551 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.551 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:33.551 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:33.551 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.551 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.551 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:33.809 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:33.809 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:33.809 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:33.809 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.809 
02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.809 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:33.809 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:33.809 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.809 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.809 02:17:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:34.068 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:34.068 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:34.068 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:34.068 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.068 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.068 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:34.068 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:34.068 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.068 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.068 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:34.326 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:34.326 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:34.326 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:34.326 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.326 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.326 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:34.326 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:34.326 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.326 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.326 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:34.326 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:34.326 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:34.326 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:34.326 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.326 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.326 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:34.326 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:34.326 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.585 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.585 02:17:21 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.585 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.585 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:34.585 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:34.585 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.585 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:34.585 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:34.585 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.585 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:34.585 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:34.585 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:34.585 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:34.585 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:34.585 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:34.585 02:17:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:34.585 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.585 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:34.585 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:34.844 malloc_lvol_verify 00:06:34.844 02:17:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:35.102 2e415263-f5bf-4a3d-9e78-ae22f2277242 00:06:35.102 02:17:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:35.359 a50d0af5-c2eb-4a41-a89d-b4b52a506e66 00:06:35.359 02:17:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:35.617 /dev/nbd0 00:06:35.617 02:17:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:35.617 02:17:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:35.617 02:17:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:35.617 02:17:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:35.617 02:17:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:35.617 mke2fs 1.47.0 (5-Feb-2023) 00:06:35.617 Discarding device blocks: 0/4096 done 00:06:35.617 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:35.617 00:06:35.617 Allocating group tables: 0/1 done 00:06:35.617 Writing inode tables: 0/1 done 00:06:35.617 Creating journal (1024 blocks): done 00:06:35.617 Writing superblocks and filesystem accounting information: 0/1 done 00:06:35.617 00:06:35.617 02:17:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 
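The nbd_with_lvol_verify block above boils down to a handful of RPC calls: create a malloc bdev, build a logical volume store on it, carve out a small lvol, expose it over nbd, and prove the stack works end to end by putting a real ext4 filesystem on it. A condensed sketch of the same flow against an already-running SPDK target (socket path and sizes copied from the trace; error handling kept minimal):

#!/usr/bin/env bash
set -e
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB bdev, 512 B blocks
rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvolstore on top of it
rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MB logical volume
rpc nbd_start_disk lvs/lvol /dev/nbd0                 # expose it as /dev/nbd0
mkfs.ext4 /dev/nbd0                                   # end-to-end sanity check
rpc nbd_stop_disk /dev/nbd0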
00:06:35.617 02:17:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.617 02:17:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:35.617 02:17:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:35.617 02:17:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:35.617 02:17:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.617 02:17:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:35.617 02:17:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:35.617 02:17:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:35.617 02:17:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:35.617 02:17:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.617 02:17:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.617 02:17:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:35.617 02:17:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:35.617 02:17:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.617 02:17:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 59931 00:06:35.617 02:17:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 59931 ']' 00:06:35.617 02:17:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 59931 00:06:35.617 02:17:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:06:35.617 02:17:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:35.617 02:17:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59931 00:06:35.876 killing process with pid 59931 00:06:35.876 02:17:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:35.876 02:17:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:35.876 02:17:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59931' 00:06:35.876 02:17:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 59931 00:06:35.876 02:17:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 59931 00:06:36.442 02:17:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:36.442 00:06:36.442 real 0m9.361s 00:06:36.442 user 0m13.346s 00:06:36.442 sys 0m3.053s 00:06:36.442 02:17:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:36.442 ************************************ 00:06:36.442 END TEST bdev_nbd 00:06:36.442 ************************************ 00:06:36.442 02:17:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:36.442 02:17:23 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:36.442 02:17:23 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:36.442 skipping fio tests on NVMe due to multi-ns failures. 00:06:36.442 02:17:23 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
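The teardown above finishes with killprocess, which is essentially "confirm the PID is alive, log what it is, SIGTERM it, and reap it". A simplified sketch of that helper (the real one also special-cases sudo-wrapped processes, omitted here):

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 1      # still running?
    local name
    name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0 for SPDK apps
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true             # reap it if it is our child
}
killprocess 59931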
00:06:36.442 02:17:23 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:06:36.442 02:17:23 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:06:36.442 02:17:23 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']'
00:06:36.442 02:17:23 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:36.442 02:17:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:36.442 ************************************
00:06:36.442 START TEST bdev_verify
00:06:36.442 ************************************
00:06:36.442 02:17:23 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:06:36.707 [2024-11-04 02:17:23.610432] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization...
00:06:36.707 [2024-11-04 02:17:23.610536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60304 ]
00:06:36.707 [2024-11-04 02:17:23.766155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:36.966 [2024-11-04 02:17:23.846848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:36.966 [2024-11-04 02:17:23.846900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:37.531 Running I/O for 5 seconds...
00:06:39.872 24704.00 IOPS, 96.50 MiB/s
[2024-11-04T02:17:27.548Z] 26016.00 IOPS, 101.62 MiB/s
[2024-11-04T02:17:28.921Z] 25258.67 IOPS, 98.67 MiB/s
[2024-11-04T02:17:29.485Z] 25552.00 IOPS, 99.81 MiB/s
[2024-11-04T02:17:29.743Z] 25062.40 IOPS, 97.90 MiB/s
00:06:42.632 Latency(us)
[2024-11-04T02:17:29.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:42.632 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:42.632 Verification LBA range: start 0x0 length 0xbd0bd
00:06:42.632 Nvme0n1 : 5.07 2070.78 8.09 0.00 0.00 61699.79 10788.23 117763.15
00:06:42.632 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:42.632 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:06:42.632 Nvme0n1 : 5.08 2067.92 8.08 0.00 0.00 61451.20 15728.64 60898.07
00:06:42.632 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:42.632 Verification LBA range: start 0x0 length 0xa0000
00:06:42.632 Nvme1n1 : 5.07 2070.22 8.09 0.00 0.00 61634.09 9628.75 109697.18
00:06:42.632 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:42.633 Verification LBA range: start 0xa0000 length 0xa0000
00:06:42.633 Nvme1n1 : 5.08 2066.20 8.07 0.00 0.00 61384.45 12502.25 57671.68
00:06:42.633 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:42.633 Verification LBA range: start 0x0 length 0x80000
00:06:42.633 Nvme2n1 : 5.08 2067.93 8.08 0.00 0.00 61602.13 13308.85 102437.81
00:06:42.633 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:42.633 Verification LBA range: start 0x80000 length 0x80000
00:06:42.633 Nvme2n1 : 5.09 2063.97 8.06 0.00 0.00 61336.41 9729.58 60091.47
00:06:42.633 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:42.633 Verification LBA range: start 0x0 length 0x80000
00:06:42.633 Nvme2n2 : 5.08 2066.22 8.07 0.00 0.00 61540.98 12804.73 105664.20
00:06:42.633 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:42.633 Verification LBA range: start 0x80000 length 0x80000
00:06:42.633 Nvme2n2 : 5.09 2063.43 8.06 0.00 0.00 61302.80 7360.20 64527.75
00:06:42.633 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:42.633 Verification LBA range: start 0x0 length 0x80000
00:06:42.633 Nvme2n3 : 5.08 2064.68 8.07 0.00 0.00 61476.76 11947.72 113730.17
00:06:42.633 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:42.633 Verification LBA range: start 0x80000 length 0x80000
00:06:42.633 Nvme2n3 : 5.07 2070.23 8.09 0.00 0.00 61679.59 11947.72 65737.65
00:06:42.633 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:42.633 Verification LBA range: start 0x0 length 0x20000
00:06:42.633 Nvme3n1 : 5.08 2064.26 8.06 0.00 0.00 61386.26 6301.54 116956.55
00:06:42.633 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:42.633 Verification LBA range: start 0x20000 length 0x20000
00:06:42.633 Nvme3n1 : 5.07 2068.67 8.08 0.00 0.00 61563.32 14619.57 64124.46
00:06:42.633 [2024-11-04T02:17:29.744Z] ===================================================================================================================
00:06:42.633 [2024-11-04T02:17:29.744Z] Total : 24804.49 96.89 0.00 0.00 61504.81 6301.54 117763.15
00:06:44.004
00:06:44.004 real 0m7.186s
00:06:44.004 user 0m12.924s
00:06:44.004 sys 0m0.193s
00:06:44.004 02:17:30 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:44.004 ************************************
00:06:44.004 END TEST bdev_verify
00:06:44.004 ************************************
00:06:44.004 02:17:30 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:06:44.004 02:17:30 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:06:44.004 02:17:30 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']'
00:06:44.004 02:17:30 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:44.004 02:17:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:44.004 ************************************
00:06:44.004 START TEST bdev_verify_big_io
00:06:44.004 ************************************
00:06:44.004 02:17:30 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:06:44.004 [2024-11-04 02:17:30.838393] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization...
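Both bdev_verify above and bdev_verify_big_io starting here are single bdevperf invocations; only the I/O size changes (4096 vs 65536 bytes). Reproducing the run by hand would look roughly like this (paths as in the log; the flag glosses are paraphrased from bdevperf's help output and worth double-checking against the local build):

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
# -q 128: queue depth; -o 65536: I/O size in bytes; -w verify: write, read
# back and compare; -t 5: run for 5 seconds; -C: let every core drive every
# bdev; -m 0x3: run SPDK reactors on cores 0 and 1
"$bdevperf" --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 65536 -w verify -t 5 -C -m 0x3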
00:06:44.004 [2024-11-04 02:17:30.838508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60398 ]
00:06:44.004 [2024-11-04 02:17:30.994559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:44.004 [2024-11-04 02:17:31.076028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:44.004 [2024-11-04 02:17:31.076190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:44.569 Running I/O for 5 seconds...
00:06:49.767 608.00 IOPS, 38.00 MiB/s
[2024-11-04T02:17:37.812Z] 2339.50 IOPS, 146.22 MiB/s
[2024-11-04T02:17:38.070Z] 2291.00 IOPS, 143.19 MiB/s
[2024-11-04T02:17:38.070Z] 2346.25 IOPS, 146.64 MiB/s
00:06:50.959 Latency(us)
[2024-11-04T02:17:38.070Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:50.959 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:50.959 Verification LBA range: start 0x0 length 0xbd0b
00:06:50.959 Nvme0n1 : 5.93 86.34 5.40 0.00 0.00 1413905.28 8822.15 1561571.64
00:06:50.959 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:50.959 Verification LBA range: start 0xbd0b length 0xbd0b
00:06:50.959 Nvme0n1 : 5.72 132.20 8.26 0.00 0.00 909629.01 14922.04 1129235.69
00:06:50.959 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:50.959 Verification LBA range: start 0x0 length 0xa000
00:06:50.959 Nvme1n1 : 5.93 86.31 5.39 0.00 0.00 1344534.06 127442.31 1277649.53
00:06:50.959 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:50.959 Verification LBA range: start 0xa000 length 0xa000
00:06:50.959 Nvme1n1 : 5.58 137.55 8.60 0.00 0.00 864602.06 103244.41 935652.43
00:06:50.959 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:50.959 Verification LBA range: start 0x0 length 0x8000
00:06:50.959 Nvme2n1 : 6.06 95.03 5.94 0.00 0.00 1177684.55 30045.74 1129235.69
00:06:50.959 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:50.959 Verification LBA range: start 0x8000 length 0x8000
00:06:50.959 Nvme2n1 : 5.76 144.37 9.02 0.00 0.00 802480.08 41338.09 796917.76
00:06:50.959 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:50.959 Verification LBA range: start 0x0 length 0x8000
00:06:50.959 Nvme2n2 : 6.11 101.54 6.35 0.00 0.00 1048061.47 28029.24 1161499.57
00:06:50.959 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:50.959 Verification LBA range: start 0x8000 length 0x8000
00:06:50.959 Nvme2n2 : 5.86 149.13 9.32 0.00 0.00 750929.05 49202.41 871124.68
00:06:50.959 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:50.959 Verification LBA range: start 0x0 length 0x8000
00:06:50.959 Nvme2n3 : 6.21 127.31 7.96 0.00 0.00 809741.06 7965.14 1961643.72
00:06:50.959 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:50.959 Verification LBA range: start 0x8000 length 0x8000
00:06:50.959 Nvme2n3 : 5.93 154.45 9.65 0.00 0.00 700451.28 67754.14 929199.66
00:06:50.959 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:50.959 Verification LBA range: start 0x0 length 0x2000
00:06:50.959 Nvme3n1 : 6.32 186.59 11.66 0.00 0.00 530416.87 806.60 1987454.82
00:06:50.959 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:50.959 Verification LBA range: start 0x2000 length 0x2000
00:06:50.959 Nvme3n1 : 6.02 174.14 10.88 0.00 0.00 607481.26 617.55 1006632.96
00:06:50.959 [2024-11-04T02:17:38.070Z] ===================================================================================================================
00:06:50.959 [2024-11-04T02:17:38.070Z] Total : 1574.97 98.44 0.00 0.00 848245.66 617.55 1987454.82
00:06:52.859
00:06:52.859 real 0m8.675s
00:06:52.859 user 0m16.067s
00:06:52.859 sys 0m0.233s
00:06:52.859 02:17:39 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:52.859 ************************************
00:06:52.859 END TEST bdev_verify_big_io
00:06:52.859 ************************************
00:06:52.859 02:17:39 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:06:52.859 02:17:39 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:52.860 02:17:39 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']'
00:06:52.860 02:17:39 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:52.860 02:17:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:52.860 ************************************
00:06:52.860 START TEST bdev_write_zeroes
00:06:52.860 ************************************
00:06:52.860 02:17:39 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:52.860 [2024-11-04 02:17:39.558093] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization...
00:06:52.860 [2024-11-04 02:17:39.558209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60507 ]
00:06:52.860 [2024-11-04 02:17:39.718903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:52.860 [2024-11-04 02:17:39.824705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:53.425 Running I/O for 1 seconds...
00:06:54.357 74880.00 IOPS, 292.50 MiB/s
00:06:54.357 Latency(us)
[2024-11-04T02:17:41.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:54.357 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:54.357 Nvme0n1 : 1.02 12408.99 48.47 0.00 0.00 10295.32 8217.21 19963.27
00:06:54.357 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:54.357 Nvme1n1 : 1.02 12393.95 48.41 0.00 0.00 10293.25 8771.74 19459.15
00:06:54.357 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:54.357 Nvme2n1 : 1.02 12379.46 48.36 0.00 0.00 10281.91 8065.97 18854.20
00:06:54.357 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:54.357 Nvme2n2 : 1.02 12365.00 48.30 0.00 0.00 10279.95 8015.56 18450.90
00:06:54.357 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:54.357 Nvme2n3 : 1.03 12350.60 48.24 0.00 0.00 10261.14 6377.16 18450.90
00:06:54.357 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:54.357 Nvme3n1 : 1.03 12336.15 48.19 0.00 0.00 10257.08 6099.89 20064.10
00:06:54.357 [2024-11-04T02:17:41.468Z] ===================================================================================================================
00:06:54.357 [2024-11-04T02:17:41.468Z] Total : 74234.16 289.98 0.00 0.00 10278.11 6099.89 20064.10
00:06:55.323
00:06:55.323 real 0m2.676s
00:06:55.323 user 0m2.381s
00:06:55.323 sys 0m0.182s
00:06:55.323 02:17:42 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:55.323 02:17:42 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:06:55.323 ************************************
00:06:55.323 END TEST bdev_write_zeroes
00:06:55.323 ************************************
00:06:55.323 02:17:42 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:55.323 02:17:42 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']'
00:06:55.323 02:17:42 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:55.323 02:17:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:55.323 ************************************
00:06:55.323 START TEST bdev_json_nonenclosed
00:06:55.323 ************************************
00:06:55.323 02:17:42 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:55.323 [2024-11-04 02:17:42.271453] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization...
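bdev_json_nonenclosed (and bdev_json_nonarray right after it) are negative tests: they feed bdevperf a config that is structurally wrong and the pass condition is that spdk_app_start refuses it, as the "not enclosed in {}" error below shows. A minimal reproduction sketch (the heredoc content is an illustrative guess at a not-enclosed config, not the actual nonenclosed.json):

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
cat > /tmp/nonenclosed.json <<'EOF'
"subsystems": []
EOF
# Startup must fail on a config whose top level is not a JSON object.
if "$bdevperf" --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''; then
    echo "FAIL: malformed config was accepted" >&2
else
    echo "PASS: malformed config was rejected"
fi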
00:06:55.323 [2024-11-04 02:17:42.271582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60562 ] 00:06:55.323 [2024-11-04 02:17:42.423113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.582 [2024-11-04 02:17:42.524906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.582 [2024-11-04 02:17:42.524979] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:55.582 [2024-11-04 02:17:42.524995] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:55.582 [2024-11-04 02:17:42.525004] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:55.841 00:06:55.841 real 0m0.498s 00:06:55.841 user 0m0.299s 00:06:55.841 sys 0m0.096s 00:06:55.841 02:17:42 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:55.841 02:17:42 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:55.841 ************************************ 00:06:55.841 END TEST bdev_json_nonenclosed 00:06:55.841 ************************************ 00:06:55.841 02:17:42 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:55.841 02:17:42 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:06:55.841 02:17:42 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:55.841 02:17:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:55.841 ************************************ 00:06:55.841 START TEST bdev_json_nonarray 00:06:55.841 ************************************ 00:06:55.841 02:17:42 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:55.841 [2024-11-04 02:17:42.801752] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:06:55.841 [2024-11-04 02:17:42.801886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60582 ] 00:06:56.108 [2024-11-04 02:17:42.962329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.108 [2024-11-04 02:17:43.064041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.108 [2024-11-04 02:17:43.064127] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:06:56.108 [2024-11-04 02:17:43.064143] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:56.108 [2024-11-04 02:17:43.064153] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:56.377 00:06:56.377 real 0m0.504s 00:06:56.377 user 0m0.314s 00:06:56.377 sys 0m0.085s 00:06:56.377 ************************************ 00:06:56.377 02:17:43 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:56.377 02:17:43 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:56.377 END TEST bdev_json_nonarray 00:06:56.377 ************************************ 00:06:56.377 02:17:43 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:06:56.377 02:17:43 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:06:56.377 02:17:43 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:06:56.377 02:17:43 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:56.377 02:17:43 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:06:56.377 02:17:43 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:56.377 02:17:43 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:56.377 02:17:43 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:56.377 02:17:43 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:56.377 02:17:43 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:56.377 02:17:43 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:56.377 00:06:56.377 real 0m35.730s 00:06:56.377 user 0m54.614s 00:06:56.377 sys 0m4.954s 00:06:56.377 02:17:43 blockdev_nvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:56.377 02:17:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:56.377 ************************************ 00:06:56.377 END TEST blockdev_nvme 00:06:56.377 ************************************ 00:06:56.377 02:17:43 -- spdk/autotest.sh@209 -- # uname -s 00:06:56.377 02:17:43 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:56.377 02:17:43 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:56.377 02:17:43 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:56.377 02:17:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:56.377 02:17:43 -- common/autotest_common.sh@10 -- # set +x 00:06:56.377 ************************************ 00:06:56.377 START TEST blockdev_nvme_gpt 00:06:56.377 ************************************ 00:06:56.377 02:17:43 blockdev_nvme_gpt -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:56.377 * Looking for test storage... 
00:06:56.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:56.377 02:17:43 blockdev_nvme_gpt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:56.377 02:17:43 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lcov --version 00:06:56.378 02:17:43 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:56.378 02:17:43 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:56.378 02:17:43 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.378 02:17:43 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.378 02:17:43 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.378 02:17:43 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.378 02:17:43 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.378 02:17:43 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.378 02:17:43 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.378 02:17:43 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.378 02:17:43 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.378 02:17:43 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.378 02:17:43 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.378 02:17:43 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:56.378 02:17:43 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:56.378 02:17:43 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.378 02:17:43 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:56.378 02:17:43 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:56.378 02:17:43 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:56.378 02:17:43 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.378 02:17:43 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:56.378 02:17:43 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.378 02:17:43 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:56.378 02:17:43 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:56.378 02:17:43 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.378 02:17:43 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:56.378 02:17:43 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.378 02:17:43 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.378 02:17:43 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.378 02:17:43 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:56.378 02:17:43 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.378 02:17:43 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:56.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.378 --rc genhtml_branch_coverage=1 00:06:56.378 --rc genhtml_function_coverage=1 00:06:56.378 --rc genhtml_legend=1 00:06:56.378 --rc geninfo_all_blocks=1 00:06:56.378 --rc geninfo_unexecuted_blocks=1 00:06:56.378 00:06:56.378 ' 00:06:56.378 02:17:43 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:56.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.378 --rc 
genhtml_branch_coverage=1 00:06:56.378 --rc genhtml_function_coverage=1 00:06:56.378 --rc genhtml_legend=1 00:06:56.378 --rc geninfo_all_blocks=1 00:06:56.378 --rc geninfo_unexecuted_blocks=1 00:06:56.378 00:06:56.378 ' 00:06:56.378 02:17:43 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:56.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.378 --rc genhtml_branch_coverage=1 00:06:56.378 --rc genhtml_function_coverage=1 00:06:56.378 --rc genhtml_legend=1 00:06:56.378 --rc geninfo_all_blocks=1 00:06:56.378 --rc geninfo_unexecuted_blocks=1 00:06:56.378 00:06:56.378 ' 00:06:56.378 02:17:43 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:56.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.378 --rc genhtml_branch_coverage=1 00:06:56.378 --rc genhtml_function_coverage=1 00:06:56.378 --rc genhtml_legend=1 00:06:56.378 --rc geninfo_all_blocks=1 00:06:56.378 --rc geninfo_unexecuted_blocks=1 00:06:56.378 00:06:56.378 ' 00:06:56.378 02:17:43 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:56.378 02:17:43 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:56.378 02:17:43 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:56.378 02:17:43 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:56.378 02:17:43 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:56.378 02:17:43 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:56.378 02:17:43 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:56.378 02:17:43 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:56.378 02:17:43 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:56.378 02:17:43 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:56.378 02:17:43 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:56.378 02:17:43 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:56.378 02:17:43 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:06:56.378 02:17:43 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:56.378 02:17:43 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:56.378 02:17:43 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:06:56.378 02:17:43 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:56.378 02:17:43 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:06:56.378 02:17:43 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:56.378 02:17:43 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:56.378 02:17:43 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:56.378 02:17:43 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:06:56.378 02:17:43 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:06:56.378 02:17:43 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:56.378 02:17:43 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60666 00:06:56.378 02:17:43 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:56.378 02:17:43 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60666 
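[Editor's note] The cmp_versions trace above reduces to splitting each version string on '.', '-' and ':' and comparing the fields numerically, which is how the harness picks between lcov 1.x and 2.x coverage flags. A paraphrased sketch of that helper, reconstructed from the trace rather than copied from scripts/common.sh:

```bash
#!/usr/bin/env bash
# lt A B: succeed when version A sorts strictly before version B.
lt() {
  local -a ver1 ver2
  IFS='.-:' read -ra ver1 <<< "$1"
  IFS='.-:' read -ra ver2 <<< "$2"
  local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # first differing field decides
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
  done
  return 1  # equal versions are not "less than"
}

# As in the run above: lcov reports 1.15, so "lt 1.15 2" succeeds and the
# plain --rc lcov_branch_coverage/lcov_function_coverage options are used.
lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'using lcov 1.x options'
```

(A sketch only: it assumes purely numeric version fields and ignores suffixes like "-pre".)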
00:06:56.378 02:17:43 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:56.378 02:17:43 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # '[' -z 60666 ']' 00:06:56.378 02:17:43 blockdev_nvme_gpt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.378 02:17:43 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:56.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.378 02:17:43 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.378 02:17:43 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:56.378 02:17:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:56.638 [2024-11-04 02:17:43.545306] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:06:56.638 [2024-11-04 02:17:43.545425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60666 ] 00:06:56.638 [2024-11-04 02:17:43.705162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.896 [2024-11-04 02:17:43.806684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.462 02:17:44 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:57.462 02:17:44 blockdev_nvme_gpt -- common/autotest_common.sh@866 -- # return 0 00:06:57.462 02:17:44 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:57.462 02:17:44 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:06:57.462 02:17:44 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:57.721 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:57.721 Waiting for block devices as requested 00:06:57.979 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:57.979 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:57.979 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:57.979 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:03.245 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:03.245 02:17:50 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:03.245 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:03.245 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:03.245 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:03.245 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:03.246 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:03.246 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:03.246 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:03.246 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:03.246 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:03.246 02:17:50 
blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:07:03.246 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:07:03.246 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:03.246 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:03.246 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:03.246 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:07:03.246 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:07:03.246 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:03.246 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:03.246 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:03.246 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:07:03.246 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:07:03.246 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:03.246 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:03.246 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:03.246 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:07:03.246 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:07:03.246 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:03.246 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:03.246 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:03.246 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:07:03.246 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:07:03.246 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:03.246 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:03.246 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:03.246 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:07:03.246 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:07:03.246 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:03.246 02:17:50 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:03.246 02:17:50 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:03.246 02:17:50 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:03.246 02:17:50 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:03.246 02:17:50 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:03.246 02:17:50 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:03.246 02:17:50 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:03.246 02:17:50 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:03.246 02:17:50 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:03.246 BYT; 00:07:03.246 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:03.246 02:17:50 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:03.246 BYT; 00:07:03.246 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:03.246 02:17:50 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:03.246 02:17:50 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:03.246 02:17:50 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:03.246 02:17:50 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:03.246 02:17:50 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:03.246 02:17:50 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:03.246 02:17:50 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:03.246 02:17:50 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:03.246 02:17:50 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:03.246 02:17:50 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:03.246 02:17:50 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:03.246 02:17:50 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:03.246 02:17:50 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:03.246 02:17:50 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:03.246 02:17:50 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:03.246 02:17:50 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:03.246 02:17:50 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:03.246 02:17:50 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:03.246 02:17:50 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:03.246 02:17:50 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:03.246 02:17:50 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:03.246 02:17:50 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:03.246 02:17:50 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:03.246 02:17:50 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:03.246 02:17:50 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:03.246 02:17:50 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:03.246 02:17:50 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:03.246 02:17:50 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:03.246 02:17:50 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:04.180 The operation has completed successfully. 00:07:04.180 02:17:51 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:05.114 The operation has completed successfully. 00:07:05.114 02:17:52 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:05.681 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:05.967 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:05.967 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:06.233 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:06.233 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:06.233 02:17:53 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:06.233 02:17:53 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.233 02:17:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.233 [] 00:07:06.233 02:17:53 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.233 02:17:53 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:06.233 02:17:53 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:06.233 02:17:53 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:06.233 02:17:53 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:06.233 02:17:53 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:06.233 02:17:53 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.233 02:17:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.491 02:17:53 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.491 02:17:53 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:06.491 02:17:53 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.491 02:17:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.491 02:17:53 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.491 02:17:53 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:06.491 02:17:53 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:06.491 02:17:53 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.491 02:17:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.491 02:17:53 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.491 02:17:53 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:06.491 02:17:53 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.491 02:17:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.491 02:17:53 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.491 02:17:53 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:06.491 02:17:53 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.491 02:17:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.491 02:17:53 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.491 02:17:53 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:06.491 02:17:53 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:06.491 02:17:53 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:06.491 02:17:53 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.491 02:17:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.750 02:17:53 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.750 02:17:53 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:06.750 02:17:53 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:06.751 02:17:53 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "18df1f21-b931-4041-a80b-6d0fe540a048"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "18df1f21-b931-4041-a80b-6d0fe540a048",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "6e27e217-69fe-41dd-b5cb-1394ab9328f8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6e27e217-69fe-41dd-b5cb-1394ab9328f8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "25ce714b-1d31-4236-9bcc-9a8e74532ebb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "25ce714b-1d31-4236-9bcc-9a8e74532ebb",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "abb0cd10-88b4-4468-90cd-df329e9fc8ae"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "abb0cd10-88b4-4468-90cd-df329e9fc8ae",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "5aff5499-0dc1-49b4-97ed-0a48f0b9a1a8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "5aff5499-0dc1-49b4-97ed-0a48f0b9a1a8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:06.751 02:17:53 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:06.751 02:17:53 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:06.751 02:17:53 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:06.751 02:17:53 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 60666 00:07:06.751 02:17:53 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # '[' -z 60666 ']' 00:07:06.751 02:17:53 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # kill -0 60666 00:07:06.751 02:17:53 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # uname 00:07:06.751 02:17:53 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:06.751 02:17:53 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60666 00:07:06.751 killing process with pid 60666 00:07:06.751 02:17:53 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:06.751 02:17:53 blockdev_nvme_gpt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:06.751 02:17:53 blockdev_nvme_gpt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60666' 00:07:06.751 02:17:53 blockdev_nvme_gpt -- common/autotest_common.sh@971 -- # kill 60666 00:07:06.751 02:17:53 blockdev_nvme_gpt -- common/autotest_common.sh@976 -- # wait 60666 00:07:08.126 02:17:54 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:08.126 02:17:54 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:08.126 02:17:54 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:07:08.126 02:17:54 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:08.126 02:17:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:08.126 ************************************ 00:07:08.126 START TEST bdev_hello_world 00:07:08.126 ************************************ 00:07:08.126 02:17:54 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:08.126 
[2024-11-04 02:17:54.986769] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:07:08.126 [2024-11-04 02:17:54.986901] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61285 ] 00:07:08.126 [2024-11-04 02:17:55.141584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.126 [2024-11-04 02:17:55.222533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.693 [2024-11-04 02:17:55.713760] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:08.693 [2024-11-04 02:17:55.713803] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:08.693 [2024-11-04 02:17:55.713818] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:08.693 [2024-11-04 02:17:55.715793] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:08.693 [2024-11-04 02:17:55.716365] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:08.693 [2024-11-04 02:17:55.716390] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:08.693 [2024-11-04 02:17:55.716634] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:08.693 00:07:08.693 [2024-11-04 02:17:55.716659] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:09.259 00:07:09.259 real 0m1.344s 00:07:09.259 user 0m1.092s 00:07:09.259 sys 0m0.149s 00:07:09.259 02:17:56 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:09.259 02:17:56 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:09.259 ************************************ 00:07:09.259 END TEST bdev_hello_world 00:07:09.259 ************************************ 00:07:09.259 02:17:56 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:09.259 02:17:56 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:09.259 02:17:56 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:09.260 02:17:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:09.260 ************************************ 00:07:09.260 START TEST bdev_bounds 00:07:09.260 ************************************ 00:07:09.260 02:17:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:07:09.260 02:17:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61322 00:07:09.260 02:17:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:09.260 Process bdevio pid: 61322 00:07:09.260 02:17:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61322' 00:07:09.260 02:17:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61322 00:07:09.260 02:17:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 61322 ']' 00:07:09.260 02:17:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:09.260 02:17:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.260 02:17:56 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:09.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.260 02:17:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.260 02:17:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:09.260 02:17:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:09.518 [2024-11-04 02:17:56.375714] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:07:09.518 [2024-11-04 02:17:56.375831] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61322 ] 00:07:09.518 [2024-11-04 02:17:56.530464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:09.518 [2024-11-04 02:17:56.615392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.518 [2024-11-04 02:17:56.615521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.518 [2024-11-04 02:17:56.615535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.084 02:17:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:10.085 02:17:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:07:10.085 02:17:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:10.343 I/O targets: 00:07:10.343 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:10.343 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:10.343 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:10.343 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:10.343 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:10.343 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:10.343 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:10.343 00:07:10.343 00:07:10.343 CUnit - A unit testing framework for C - Version 2.1-3 00:07:10.343 http://cunit.sourceforge.net/ 00:07:10.343 00:07:10.343 00:07:10.343 Suite: bdevio tests on: Nvme3n1 00:07:10.343 Test: blockdev write read block ...passed 00:07:10.343 Test: blockdev write zeroes read block ...passed 00:07:10.343 Test: blockdev write zeroes read no split ...passed 00:07:10.343 Test: blockdev write zeroes read split ...passed 00:07:10.343 Test: blockdev write zeroes read split partial ...passed 00:07:10.343 Test: blockdev reset ...[2024-11-04 02:17:57.304942] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:10.343 [2024-11-04 02:17:57.307705] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:07:10.343 passed 00:07:10.343 Test: blockdev write read 8 blocks ...passed 00:07:10.343 Test: blockdev write read size > 128k ...passed 00:07:10.343 Test: blockdev write read invalid size ...passed 00:07:10.343 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:10.343 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:10.343 Test: blockdev write read max offset ...passed 00:07:10.343 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:10.343 Test: blockdev writev readv 8 blocks ...passed 00:07:10.343 Test: blockdev writev readv 30 x 1block ...passed 00:07:10.343 Test: blockdev writev readv block ...passed 00:07:10.343 Test: blockdev writev readv size > 128k ...passed 00:07:10.343 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:10.343 Test: blockdev comparev and writev ...[2024-11-04 02:17:57.315379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bc604000 len:0x1000 00:07:10.343 [2024-11-04 02:17:57.315515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:10.343 passed 00:07:10.343 Test: blockdev nvme passthru rw ...passed 00:07:10.343 Test: blockdev nvme passthru vendor specific ...[2024-11-04 02:17:57.316364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:10.343 [2024-11-04 02:17:57.316396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:10.344 passed 00:07:10.344 Test: blockdev nvme admin passthru ...passed 00:07:10.344 Test: blockdev copy ...passed 00:07:10.344 Suite: bdevio tests on: Nvme2n3 00:07:10.344 Test: blockdev write read block ...passed 00:07:10.344 Test: blockdev write zeroes read block ...passed 00:07:10.344 Test: blockdev write zeroes read no split ...passed 00:07:10.344 Test: blockdev write zeroes read split ...passed 00:07:10.344 Test: blockdev write zeroes read split partial ...passed 00:07:10.344 Test: blockdev reset ...[2024-11-04 02:17:57.371493] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:10.344 [2024-11-04 02:17:57.375128] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:10.344 passed
00:07:10.344 Test: blockdev write read 8 blocks ...passed
00:07:10.344 Test: blockdev write read size > 128k ...passed
00:07:10.344 Test: blockdev write read invalid size ...passed
00:07:10.344 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:07:10.344 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:07:10.344 Test: blockdev write read max offset ...passed
00:07:10.344 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:07:10.344 Test: blockdev writev readv 8 blocks ...passed
00:07:10.344 Test: blockdev writev readv 30 x 1block ...passed
00:07:10.344 Test: blockdev writev readv block ...passed
00:07:10.344 Test: blockdev writev readv size > 128k ...passed
00:07:10.344 Test: blockdev writev readv size > 128k in two iovs ...passed
00:07:10.344 Test: blockdev comparev and writev ...[2024-11-04 02:17:57.383139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bc602000 len:0x1000
00:07:10.344 [2024-11-04 02:17:57.383178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:07:10.344 passed
00:07:10.344 Test: blockdev nvme passthru rw ...passed
00:07:10.344 Test: blockdev nvme passthru vendor specific ...[2024-11-04 02:17:57.384023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:07:10.344 [2024-11-04 02:17:57.384135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:07:10.344 passed
00:07:10.344 Test: blockdev nvme admin passthru ...passed
00:07:10.344 Test: blockdev copy ...passed
00:07:10.344 Suite: bdevio tests on: Nvme2n2
00:07:10.344 Test: blockdev write read block ...passed
00:07:10.344 Test: blockdev write zeroes read block ...passed
00:07:10.344 Test: blockdev write zeroes read no split ...passed
00:07:10.344 Test: blockdev write zeroes read split ...passed
00:07:10.344 Test: blockdev write zeroes read split partial ...passed
00:07:10.344 Test: blockdev reset ...[2024-11-04 02:17:57.438860] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller
00:07:10.344 [2024-11-04 02:17:57.442025] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:07:10.344 passed
00:07:10.344 Test: blockdev write read 8 blocks ...passed
00:07:10.344 Test: blockdev write read size > 128k ...passed
00:07:10.344 Test: blockdev write read invalid size ...passed
00:07:10.344 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:07:10.344 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:07:10.344 Test: blockdev write read max offset ...passed
00:07:10.344 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:07:10.344 Test: blockdev writev readv 8 blocks ...passed
00:07:10.344 Test: blockdev writev readv 30 x 1block ...passed
00:07:10.344 Test: blockdev writev readv block ...passed
00:07:10.344 Test: blockdev writev readv size > 128k ...passed
00:07:10.344 Test: blockdev writev readv size > 128k in two iovs ...passed
00:07:10.344 Test: blockdev comparev and writev ...[2024-11-04 02:17:57.450266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dbc38000 len:0x1000
00:07:10.344 [2024-11-04 02:17:57.450302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:07:10.344 passed
00:07:10.344 Test: blockdev nvme passthru rw ...passed
00:07:10.344 Test: blockdev nvme passthru vendor specific ...[2024-11-04 02:17:57.451445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:07:10.344 [2024-11-04 02:17:57.451851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:07:10.603 passed
00:07:10.603 Test: blockdev nvme admin passthru ...passed
00:07:10.603 Test: blockdev copy ...passed
00:07:10.603 Suite: bdevio tests on: Nvme2n1
00:07:10.603 Test: blockdev write read block ...passed
00:07:10.603 Test: blockdev write zeroes read block ...passed
00:07:10.603 Test: blockdev write zeroes read no split ...passed
00:07:10.603 Test: blockdev write zeroes read split ...passed
00:07:10.603 Test: blockdev write zeroes read split partial ...passed
00:07:10.603 Test: blockdev reset ...[2024-11-04 02:17:57.506925] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller
00:07:10.603 [2024-11-04 02:17:57.510410] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:07:10.603 passed 00:07:10.603 Test: blockdev write read 8 blocks ...passed 00:07:10.603 Test: blockdev write read size > 128k ...passed 00:07:10.603 Test: blockdev write read invalid size ...passed 00:07:10.603 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:10.603 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:10.603 Test: blockdev write read max offset ...passed 00:07:10.603 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:10.603 Test: blockdev writev readv 8 blocks ...passed 00:07:10.603 Test: blockdev writev readv 30 x 1block ...passed 00:07:10.603 Test: blockdev writev readv block ...passed 00:07:10.603 Test: blockdev writev readv size > 128k ...passed 00:07:10.603 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:10.603 Test: blockdev comparev and writev ...[2024-11-04 02:17:57.519989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dbc34000 len:0x1000 00:07:10.603 [2024-11-04 02:17:57.520112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:10.603 passed 00:07:10.603 Test: blockdev nvme passthru rw ...passed 00:07:10.603 Test: blockdev nvme passthru vendor specific ...passed 00:07:10.603 Test: blockdev nvme admin passthru ...[2024-11-04 02:17:57.521045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:10.603 [2024-11-04 02:17:57.521123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:10.603 passed 00:07:10.603 Test: blockdev copy ...passed 00:07:10.603 Suite: bdevio tests on: Nvme1n1p2 00:07:10.603 Test: blockdev write read block ...passed 00:07:10.603 Test: blockdev write zeroes read block ...passed 00:07:10.603 Test: blockdev write zeroes read no split ...passed 00:07:10.603 Test: blockdev write zeroes read split ...passed 00:07:10.603 Test: blockdev write zeroes read split partial ...passed 00:07:10.603 Test: blockdev reset ...[2024-11-04 02:17:57.575328] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:10.603 [2024-11-04 02:17:57.578062] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:10.603 passed
00:07:10.603 Test: blockdev write read 8 blocks ...passed
00:07:10.603 Test: blockdev write read size > 128k ...passed
00:07:10.603 Test: blockdev write read invalid size ...passed
00:07:10.603 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:07:10.603 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:07:10.603 Test: blockdev write read max offset ...passed
00:07:10.603 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:07:10.603 Test: blockdev writev readv 8 blocks ...passed
00:07:10.603 Test: blockdev writev readv 30 x 1block ...passed
00:07:10.603 Test: blockdev writev readv block ...passed
00:07:10.603 Test: blockdev writev readv size > 128k ...passed
00:07:10.603 Test: blockdev writev readv size > 128k in two iovs ...passed
00:07:10.603 Test: blockdev comparev and writev ...[2024-11-04 02:17:57.586568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2dbc30000 len:0x1000
00:07:10.603 [2024-11-04 02:17:57.586707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:07:10.603 passed
00:07:10.603 Test: blockdev nvme passthru rw ...passed
00:07:10.603 Test: blockdev nvme passthru vendor specific ...passed
00:07:10.603 Test: blockdev nvme admin passthru ...passed
00:07:10.603 Test: blockdev copy ...passed
00:07:10.603 Suite: bdevio tests on: Nvme1n1p1
00:07:10.603 Test: blockdev write read block ...passed
00:07:10.603 Test: blockdev write zeroes read block ...passed
00:07:10.603 Test: blockdev write zeroes read no split ...passed
00:07:10.603 Test: blockdev write zeroes read split ...passed
00:07:10.603 Test: blockdev write zeroes read split partial ...passed
00:07:10.603 Test: blockdev reset ...[2024-11-04 02:17:57.631568] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller
00:07:10.603 [2024-11-04 02:17:57.634209] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:07:10.603 passed
00:07:10.603 Test: blockdev write read 8 blocks ...
00:07:10.603 passed 00:07:10.603 Test: blockdev write read size > 128k ...passed 00:07:10.603 Test: blockdev write read invalid size ...passed 00:07:10.603 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:10.603 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:10.603 Test: blockdev write read max offset ...passed 00:07:10.603 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:10.603 Test: blockdev writev readv 8 blocks ...passed 00:07:10.603 Test: blockdev writev readv 30 x 1block ...passed 00:07:10.603 Test: blockdev writev readv block ...passed 00:07:10.603 Test: blockdev writev readv size > 128k ...passed 00:07:10.603 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:10.603 Test: blockdev comparev and writev ...[2024-11-04 02:17:57.642719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2bc80e000 len:0x1000 00:07:10.603 [2024-11-04 02:17:57.642857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:10.603 passed 00:07:10.603 Test: blockdev nvme passthru rw ...passed 00:07:10.603 Test: blockdev nvme passthru vendor specific ...passed 00:07:10.603 Test: blockdev nvme admin passthru ...passed 00:07:10.603 Test: blockdev copy ...passed 00:07:10.603 Suite: bdevio tests on: Nvme0n1 00:07:10.603 Test: blockdev write read block ...passed 00:07:10.603 Test: blockdev write zeroes read block ...passed 00:07:10.603 Test: blockdev write zeroes read no split ...passed 00:07:10.603 Test: blockdev write zeroes read split ...passed 00:07:10.603 Test: blockdev write zeroes read split partial ...passed 00:07:10.603 Test: blockdev reset ...[2024-11-04 02:17:57.687631] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:10.603 passed 00:07:10.603 Test: blockdev write read 8 blocks ...[2024-11-04 02:17:57.690163] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:10.603 passed 00:07:10.603 Test: blockdev write read size > 128k ...passed 00:07:10.603 Test: blockdev write read invalid size ...passed 00:07:10.603 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:10.603 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:10.603 Test: blockdev write read max offset ...passed 00:07:10.603 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:10.603 Test: blockdev writev readv 8 blocks ...passed 00:07:10.603 Test: blockdev writev readv 30 x 1block ...passed 00:07:10.603 Test: blockdev writev readv block ...passed 00:07:10.603 Test: blockdev writev readv size > 128k ...passed 00:07:10.603 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:10.603 Test: blockdev comparev and writev ...[2024-11-04 02:17:57.696608] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:10.603 separate metadata which is not supported yet. 
00:07:10.603 passed 00:07:10.603 Test: blockdev nvme passthru rw ...passed 00:07:10.603 Test: blockdev nvme passthru vendor specific ...[2024-11-04 02:17:57.697381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:10.603 [2024-11-04 02:17:57.697498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:10.603 passed 00:07:10.603 Test: blockdev nvme admin passthru ...passed 00:07:10.603 Test: blockdev copy ...passed 00:07:10.603 00:07:10.603 Run Summary: Type Total Ran Passed Failed Inactive 00:07:10.603 suites 7 7 n/a 0 0 00:07:10.603 tests 161 161 161 0 0 00:07:10.603 asserts 1025 1025 1025 0 n/a 00:07:10.603 00:07:10.603 Elapsed time = 1.154 seconds 00:07:10.603 0 00:07:10.863 02:17:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61322 00:07:10.863 02:17:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 61322 ']' 00:07:10.863 02:17:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 61322 00:07:10.863 02:17:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:07:10.863 02:17:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:10.863 02:17:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61322 00:07:10.863 02:17:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:10.863 02:17:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:10.863 02:17:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61322' killing process with pid 61322 00:07:10.863 02:17:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@971 -- # kill 61322 00:07:10.863 02:17:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@976 -- # wait 61322 00:07:11.430 02:17:58 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:11.430 00:07:11.430 real 0m1.957s 00:07:11.430 user 0m4.978s 00:07:11.430 sys 0m0.259s 00:07:11.430 02:17:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:11.430 02:17:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:11.430 ************************************ 00:07:11.430 END TEST bdev_bounds 00:07:11.430 ************************************ 00:07:11.430 02:17:58 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 02:17:58 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 02:17:58 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 02:17:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x ************************************ START TEST bdev_nbd ************************************ 02:17:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 02:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:11.430 02:17:58
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:11.430 02:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.430 02:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:11.430 02:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:11.430 02:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:11.430 02:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:11.430 02:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:11.430 02:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:11.430 02:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:11.430 02:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:11.430 02:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:11.430 02:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:11.430 02:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:11.430 02:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:11.430 02:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61376 00:07:11.430 02:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:11.430 02:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61376 /var/tmp/spdk-nbd.sock 00:07:11.430 02:17:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 61376 ']' 00:07:11.430 02:17:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:11.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:11.430 02:17:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:11.430 02:17:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:11.430 02:17:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:11.430 02:17:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:11.430 02:17:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:11.430 [2024-11-04 02:17:58.386380] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
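
The trace above brings up the bdev_svc app with the bdev JSON config and then parks in waitforlisten until the RPC socket at /var/tmp/spdk-nbd.sock answers; only after that does nbd_rpc_start_stop_verify begin mapping bdevs to /dev/nbdX. A condensed sketch of that bring-up, with the paths and flags taken from the trace; the polling loop is a simplified stand-in for the harness's waitforlisten helper, not its exact implementation:

    # Sketch: launch the SPDK app that will serve the NBD RPCs, then wait
    # for its UNIX-domain RPC socket before issuing any nbd_start_disk call.
    rpc_sock=/var/tmp/spdk-nbd.sock
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$rpc_sock" -i 0 --json "$conf" &
    nbd_pid=$!
    for ((i = 0; i < 100; i++)); do   # simplified waitforlisten: poll for the socket
        [[ -S $rpc_sock ]] && break
        sleep 0.1
    done
    # From here the harness drives the app over RPC, e.g.:
    #   scripts/rpc.py -s "$rpc_sock" nbd_start_disk Nvme0n1
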
00:07:11.430 [2024-11-04 02:17:58.386631] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.688 [2024-11-04 02:17:58.543306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.688 [2024-11-04 02:17:58.625655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.255 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:12.255 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:07:12.255 02:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:12.255 02:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.255 02:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:12.255 02:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:12.255 02:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:12.255 02:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.255 02:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:12.255 02:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:12.255 02:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:12.255 02:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:12.255 02:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:12.255 02:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:12.255 02:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:12.513 02:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:12.513 02:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:12.513 02:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:12.513 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:12.513 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:12.513 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:12.513 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:12.513 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:12.513 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:12.513 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:12.513 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:12.513 02:17:59 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.513 1+0 records in 00:07:12.513 1+0 records out 00:07:12.513 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000558218 s, 7.3 MB/s 00:07:12.513 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.513 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:12.513 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.513 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:12.513 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:12.513 02:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:12.513 02:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:12.513 02:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:12.772 02:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:12.772 02:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:12.772 02:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:12.772 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:12.772 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:12.772 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:12.772 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:12.772 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:12.772 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:12.772 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:12.772 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:12.772 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.772 1+0 records in 00:07:12.772 1+0 records out 00:07:12.772 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366353 s, 11.2 MB/s 00:07:12.772 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.772 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:12.772 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.772 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:12.772 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:12.772 02:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:12.772 02:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:12.772 02:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:13.031 02:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:13.031 02:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:13.031 02:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:13.031 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:07:13.031 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:13.031 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:13.031 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:13.031 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:07:13.031 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:13.031 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:13.031 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:13.031 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.031 1+0 records in 00:07:13.031 1+0 records out 00:07:13.031 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407749 s, 10.0 MB/s 00:07:13.031 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.031 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:13.031 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.031 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:13.031 02:17:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:13.031 02:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:13.031 02:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:13.031 02:17:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:13.031 02:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:13.031 02:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:13.031 02:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:13.031 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:07:13.031 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:13.031 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:13.031 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:13.031 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:07:13.031 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:13.031 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:13.031 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:13.031 02:18:00 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.031 1+0 records in 00:07:13.031 1+0 records out 00:07:13.031 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322612 s, 12.7 MB/s 00:07:13.031 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.031 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:13.031 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.031 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:13.031 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:13.031 02:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:13.031 02:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:13.031 02:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:13.289 02:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:13.289 02:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:13.289 02:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:13.289 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:07:13.289 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:13.289 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:13.289 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:13.289 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:07:13.289 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:13.289 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:13.289 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:13.289 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.289 1+0 records in 00:07:13.289 1+0 records out 00:07:13.289 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000593867 s, 6.9 MB/s 00:07:13.289 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.289 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:13.289 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.289 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:13.289 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:13.289 02:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:13.290 02:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:13.290 02:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:07:13.548 02:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:13.548 02:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:13.548 02:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:13.548 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:07:13.548 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:13.548 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:13.548 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:13.548 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:07:13.548 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:13.548 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:13.548 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:13.548 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.548 1+0 records in 00:07:13.548 1+0 records out 00:07:13.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321831 s, 12.7 MB/s 00:07:13.548 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.548 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:13.548 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.548 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:13.548 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:13.548 02:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:13.548 02:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:13.548 02:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:13.807 02:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:13.807 02:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:13.807 02:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:13.807 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd6 00:07:13.807 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:13.807 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:13.807 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:13.807 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd6 /proc/partitions 00:07:13.807 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:13.807 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:13.807 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:13.807 02:18:00 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.807 1+0 records in 00:07:13.807 1+0 records out 00:07:13.807 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504468 s, 8.1 MB/s 00:07:13.807 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.807 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:13.807 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.807 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:13.807 02:18:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:13.807 02:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:13.807 02:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:13.807 02:18:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:14.066 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:14.066 { 00:07:14.066 "nbd_device": "/dev/nbd0", 00:07:14.066 "bdev_name": "Nvme0n1" 00:07:14.066 }, 00:07:14.066 { 00:07:14.066 "nbd_device": "/dev/nbd1", 00:07:14.066 "bdev_name": "Nvme1n1p1" 00:07:14.066 }, 00:07:14.066 { 00:07:14.066 "nbd_device": "/dev/nbd2", 00:07:14.066 "bdev_name": "Nvme1n1p2" 00:07:14.066 }, 00:07:14.066 { 00:07:14.066 "nbd_device": "/dev/nbd3", 00:07:14.066 "bdev_name": "Nvme2n1" 00:07:14.066 }, 00:07:14.066 { 00:07:14.066 "nbd_device": "/dev/nbd4", 00:07:14.066 "bdev_name": "Nvme2n2" 00:07:14.066 }, 00:07:14.066 { 00:07:14.066 "nbd_device": "/dev/nbd5", 00:07:14.066 "bdev_name": "Nvme2n3" 00:07:14.066 }, 00:07:14.066 { 00:07:14.066 "nbd_device": "/dev/nbd6", 00:07:14.066 "bdev_name": "Nvme3n1" 00:07:14.066 } 00:07:14.066 ]' 00:07:14.066 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:14.066 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:14.066 { 00:07:14.066 "nbd_device": "/dev/nbd0", 00:07:14.066 "bdev_name": "Nvme0n1" 00:07:14.066 }, 00:07:14.066 { 00:07:14.066 "nbd_device": "/dev/nbd1", 00:07:14.066 "bdev_name": "Nvme1n1p1" 00:07:14.066 }, 00:07:14.066 { 00:07:14.066 "nbd_device": "/dev/nbd2", 00:07:14.066 "bdev_name": "Nvme1n1p2" 00:07:14.066 }, 00:07:14.066 { 00:07:14.066 "nbd_device": "/dev/nbd3", 00:07:14.066 "bdev_name": "Nvme2n1" 00:07:14.066 }, 00:07:14.066 { 00:07:14.066 "nbd_device": "/dev/nbd4", 00:07:14.066 "bdev_name": "Nvme2n2" 00:07:14.066 }, 00:07:14.066 { 00:07:14.066 "nbd_device": "/dev/nbd5", 00:07:14.066 "bdev_name": "Nvme2n3" 00:07:14.066 }, 00:07:14.066 { 00:07:14.066 "nbd_device": "/dev/nbd6", 00:07:14.066 "bdev_name": "Nvme3n1" 00:07:14.066 } 00:07:14.066 ]' 00:07:14.066 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:14.066 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:14.066 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.066 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:14.066 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:14.066 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:14.066 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.066 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:14.325 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:14.325 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:14.325 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:14.325 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.325 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.325 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:14.325 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.325 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.325 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.325 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:14.583 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:14.583 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:14.583 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:14.583 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.583 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.583 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:14.583 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.583 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.583 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.583 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:14.583 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:14.583 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:14.583 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:14.583 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.583 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.583 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:14.583 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.583 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.583 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.583 02:18:01 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:14.841 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:14.841 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:14.841 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:14.841 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.841 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.841 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:14.841 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.841 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.841 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.841 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:15.099 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:15.099 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:15.099 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:15.099 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.099 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.099 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:15.099 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.099 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.099 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.099 02:18:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:15.099 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:15.099 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:15.099 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:15.099 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.099 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.099 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:15.099 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.099 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.099 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.099 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:15.358 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:15.358 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:15.358 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:07:15.358 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.358 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.358 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:15.358 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.358 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.358 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:15.358 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.358 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:15.358 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:15.358 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:15.358 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:15.617 
02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:15.617 /dev/nbd0 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.617 1+0 records in 00:07:15.617 1+0 records out 00:07:15.617 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467427 s, 8.8 MB/s 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:15.617 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:15.876 /dev/nbd1 00:07:15.876 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:15.876 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:15.876 02:18:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:15.876 02:18:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:15.876 02:18:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:15.876 02:18:02 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:15.876 02:18:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:15.876 02:18:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:15.876 02:18:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:15.876 02:18:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:15.876 02:18:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.876 1+0 records in 00:07:15.876 1+0 records out 00:07:15.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000501154 s, 8.2 MB/s 00:07:15.876 02:18:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.876 02:18:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:15.876 02:18:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.876 02:18:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:15.876 02:18:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:15.876 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:15.876 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:15.876 02:18:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:16.134 /dev/nbd10 00:07:16.134 02:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:16.134 02:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:16.134 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:07:16.134 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:16.134 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:16.134 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:16.134 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:07:16.134 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:16.134 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:16.134 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:16.134 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.134 1+0 records in 00:07:16.134 1+0 records out 00:07:16.134 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363214 s, 11.3 MB/s 00:07:16.134 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.134 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:16.134 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.134 02:18:03 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:16.134 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:16.134 02:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:16.134 02:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:16.134 02:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:16.392 /dev/nbd11 00:07:16.393 02:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:16.393 02:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:16.393 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:07:16.393 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:16.393 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:16.393 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:16.393 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:07:16.393 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:16.393 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:16.393 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:16.393 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.393 1+0 records in 00:07:16.393 1+0 records out 00:07:16.393 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427641 s, 9.6 MB/s 00:07:16.393 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.393 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:16.393 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.393 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:16.393 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:16.393 02:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:16.393 02:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:16.393 02:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:16.651 /dev/nbd12 00:07:16.651 02:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:16.651 02:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:16.651 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:07:16.651 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:16.651 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:16.651 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:16.651 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 
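
Every waitfornbd call in this trace performs the same two-step readiness check: poll /proc/partitions until the kernel has registered the device, then prove the device actually serves I/O with a single 4 KiB O_DIRECT read. A minimal sketch of that check, matching the commands visible in the xtrace (the temp-file path and retry pacing here are illustrative, not the harness's exact values):

    # Sketch of the waitfornbd pattern from common/autotest_common.sh:
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # pacing is illustrative
        done
        # One direct-I/O read; an empty output file means the device is
        # registered but not yet answering requests.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }
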
00:07:16.651 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:16.651 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:16.651 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:16.651 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.651 1+0 records in 00:07:16.651 1+0 records out 00:07:16.651 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355033 s, 11.5 MB/s 00:07:16.651 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.651 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:16.651 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.651 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:16.651 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:16.651 02:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:16.651 02:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:16.651 02:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:16.912 /dev/nbd13 00:07:16.912 02:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:16.912 02:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:16.912 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:07:16.912 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:16.912 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:16.912 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:16.912 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:07:16.912 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:16.912 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:16.912 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:16.912 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.912 1+0 records in 00:07:16.912 1+0 records out 00:07:16.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370642 s, 11.1 MB/s 00:07:16.912 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.912 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:16.912 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.912 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:16.912 02:18:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:16.912 02:18:03 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:16.912 02:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:16.912 02:18:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:17.169 /dev/nbd14 00:07:17.169 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:17.169 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:17.169 02:18:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd14 00:07:17.169 02:18:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:17.169 02:18:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:17.169 02:18:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:17.169 02:18:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd14 /proc/partitions 00:07:17.169 02:18:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:17.169 02:18:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:17.169 02:18:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:17.169 02:18:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.169 1+0 records in 00:07:17.169 1+0 records out 00:07:17.169 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454243 s, 9.0 MB/s 00:07:17.169 02:18:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.169 02:18:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:17.169 02:18:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.169 02:18:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:17.169 02:18:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:17.169 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.169 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:17.169 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:17.169 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.169 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:17.169 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:17.169 { 00:07:17.169 "nbd_device": "/dev/nbd0", 00:07:17.169 "bdev_name": "Nvme0n1" 00:07:17.169 }, 00:07:17.169 { 00:07:17.169 "nbd_device": "/dev/nbd1", 00:07:17.169 "bdev_name": "Nvme1n1p1" 00:07:17.169 }, 00:07:17.169 { 00:07:17.170 "nbd_device": "/dev/nbd10", 00:07:17.170 "bdev_name": "Nvme1n1p2" 00:07:17.170 }, 00:07:17.170 { 00:07:17.170 "nbd_device": "/dev/nbd11", 00:07:17.170 "bdev_name": "Nvme2n1" 00:07:17.170 }, 00:07:17.170 { 00:07:17.170 "nbd_device": "/dev/nbd12", 00:07:17.170 "bdev_name": "Nvme2n2" 00:07:17.170 }, 00:07:17.170 { 00:07:17.170 "nbd_device": "/dev/nbd13", 00:07:17.170 "bdev_name": "Nvme2n3" 
00:07:17.170 }, 00:07:17.170 { 00:07:17.170 "nbd_device": "/dev/nbd14", 00:07:17.170 "bdev_name": "Nvme3n1" 00:07:17.170 } 00:07:17.170 ]' 00:07:17.170 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:17.170 { 00:07:17.170 "nbd_device": "/dev/nbd0", 00:07:17.170 "bdev_name": "Nvme0n1" 00:07:17.170 }, 00:07:17.170 { 00:07:17.170 "nbd_device": "/dev/nbd1", 00:07:17.170 "bdev_name": "Nvme1n1p1" 00:07:17.170 }, 00:07:17.170 { 00:07:17.170 "nbd_device": "/dev/nbd10", 00:07:17.170 "bdev_name": "Nvme1n1p2" 00:07:17.170 }, 00:07:17.170 { 00:07:17.170 "nbd_device": "/dev/nbd11", 00:07:17.170 "bdev_name": "Nvme2n1" 00:07:17.170 }, 00:07:17.170 { 00:07:17.170 "nbd_device": "/dev/nbd12", 00:07:17.170 "bdev_name": "Nvme2n2" 00:07:17.170 }, 00:07:17.170 { 00:07:17.170 "nbd_device": "/dev/nbd13", 00:07:17.170 "bdev_name": "Nvme2n3" 00:07:17.170 }, 00:07:17.170 { 00:07:17.170 "nbd_device": "/dev/nbd14", 00:07:17.170 "bdev_name": "Nvme3n1" 00:07:17.170 } 00:07:17.170 ]' 00:07:17.170 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:17.428 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:17.428 /dev/nbd1 00:07:17.428 /dev/nbd10 00:07:17.428 /dev/nbd11 00:07:17.428 /dev/nbd12 00:07:17.428 /dev/nbd13 00:07:17.428 /dev/nbd14' 00:07:17.428 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:17.428 /dev/nbd1 00:07:17.428 /dev/nbd10 00:07:17.428 /dev/nbd11 00:07:17.428 /dev/nbd12 00:07:17.428 /dev/nbd13 00:07:17.428 /dev/nbd14' 00:07:17.428 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:17.428 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:17.428 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:17.428 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:17.428 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:17.428 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:17.428 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:17.428 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:17.428 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:17.428 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:17.428 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:17.428 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:17.428 256+0 records in 00:07:17.428 256+0 records out 00:07:17.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00902295 s, 116 MB/s 00:07:17.428 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.428 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:17.428 256+0 records in 00:07:17.428 256+0 records out 00:07:17.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0672635 s, 15.6 MB/s 00:07:17.428 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.428 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:17.428 256+0 records in 00:07:17.428 256+0 records out 00:07:17.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0666824 s, 15.7 MB/s 00:07:17.428 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.428 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:17.428 256+0 records in 00:07:17.428 256+0 records out 00:07:17.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0704807 s, 14.9 MB/s 00:07:17.428 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.428 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:17.686 256+0 records in 00:07:17.686 256+0 records out 00:07:17.686 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0667351 s, 15.7 MB/s 00:07:17.686 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.686 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:17.686 256+0 records in 00:07:17.686 256+0 records out 00:07:17.686 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0625049 s, 16.8 MB/s 00:07:17.686 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.686 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:17.686 256+0 records in 00:07:17.686 256+0 records out 00:07:17.686 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0800944 s, 13.1 MB/s 00:07:17.686 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.686 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:17.945 256+0 records in 00:07:17.945 256+0 records out 00:07:17.945 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0874848 s, 12.0 MB/s 00:07:17.945 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:17.945 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:17.945 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:17.945 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:17.945 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:17.945 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:17.945 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:17.945 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:07:17.945 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:17.945 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:17.945 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:17.945 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:17.945 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:17.945 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:17.945 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:17.945 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:17.945 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:17.945 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:17.945 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:17.945 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:17.945 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:17.945 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:17.945 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:17.945 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.945 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:17.945 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:17.945 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:17.945 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.945 02:18:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:18.203 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:18.203 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:18.203 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:18.203 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.203 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.203 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:18.203 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.203 02:18:05 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:18.203 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.203 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:18.203 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:18.203 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:18.203 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:18.203 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.203 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.203 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:18.203 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.203 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.203 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.203 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:18.461 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:18.461 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:18.461 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:18.461 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.461 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.461 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:18.461 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.461 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.461 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.461 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:18.720 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:18.720 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:18.720 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:18.720 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.720 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.720 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:18.720 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.720 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.720 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.720 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:18.996 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:18.996 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:18.996 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:18.996 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.996 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.996 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:18.996 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.996 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.996 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.996 02:18:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:18.996 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:18.996 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:18.996 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:18.996 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.996 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.996 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:18.996 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.996 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.996 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.996 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:19.254 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:19.254 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:19.254 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:19.254 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.254 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.254 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:19.254 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:19.254 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.254 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:19.254 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.254 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:19.512 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:19.512 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:19.512 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:19.512 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:19.512 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:19.512 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:19.512 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:19.512 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:19.512 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:19.512 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:19.512 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:19.512 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:19.512 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:19.512 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.512 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:19.512 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:19.769 malloc_lvol_verify 00:07:19.769 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:20.027 7a8e7f6c-a5ec-4b9b-b7ca-812cc0c02ffd 00:07:20.027 02:18:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:20.027 d21e9aaf-169d-4281-beb2-dd08aa8cbc41 00:07:20.027 02:18:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:20.285 /dev/nbd0 00:07:20.285 02:18:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:20.285 02:18:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:20.285 02:18:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:20.285 02:18:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:20.285 02:18:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:20.285 mke2fs 1.47.0 (5-Feb-2023) 00:07:20.285 Discarding device blocks: 0/4096 done 00:07:20.285 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:20.285 00:07:20.285 Allocating group tables: 0/1 done 00:07:20.285 Writing inode tables: 0/1 done 00:07:20.285 Creating journal (1024 blocks): done 00:07:20.285 Writing superblocks and filesystem accounting information: 0/1 done 00:07:20.285 00:07:20.285 02:18:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:20.285 02:18:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.285 02:18:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:20.285 02:18:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:20.285 02:18:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:20.285 02:18:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:20.285 02:18:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:20.543 02:18:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:20.543 02:18:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:20.543 02:18:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:20.543 02:18:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.543 02:18:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.543 02:18:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:20.543 02:18:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.543 02:18:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.543 02:18:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61376 00:07:20.543 02:18:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 61376 ']' 00:07:20.543 02:18:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 61376 00:07:20.543 02:18:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:07:20.543 02:18:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:20.543 02:18:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61376 00:07:20.543 killing process with pid 61376 00:07:20.543 02:18:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:20.543 02:18:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:20.543 02:18:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61376' 00:07:20.543 02:18:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@971 -- # kill 61376 00:07:20.543 02:18:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@976 -- # wait 61376 00:07:21.109 02:18:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:21.109 00:07:21.109 real 0m9.860s 00:07:21.109 user 0m14.182s 00:07:21.109 sys 0m3.236s 00:07:21.109 02:18:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:21.109 02:18:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:21.109 ************************************ 00:07:21.109 END TEST bdev_nbd 00:07:21.109 ************************************ 00:07:21.109 02:18:08 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:21.109 02:18:08 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:21.110 skipping fio tests on NVMe due to multi-ns failures. 00:07:21.110 02:18:08 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:21.110 02:18:08 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:21.110 02:18:08 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:21.110 02:18:08 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:21.110 02:18:08 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:07:21.110 02:18:08 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:21.110 02:18:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:21.110 ************************************ 00:07:21.110 START TEST bdev_verify 00:07:21.110 ************************************ 00:07:21.110 02:18:08 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:21.368 [2024-11-04 02:18:08.277861] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:07:21.368 [2024-11-04 02:18:08.277980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61778 ] 00:07:21.368 [2024-11-04 02:18:08.433272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:21.626 [2024-11-04 02:18:08.512001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.626 [2024-11-04 02:18:08.512142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.218 Running I/O for 5 seconds... 
00:07:24.521 24704.00 IOPS, 96.50 MiB/s [2024-11-04T02:18:12.564Z] 24512.00 IOPS, 95.75 MiB/s [2024-11-04T02:18:13.496Z] 23808.00 IOPS, 93.00 MiB/s [2024-11-04T02:18:14.430Z] 23824.00 IOPS, 93.06 MiB/s [2024-11-04T02:18:14.430Z] 23897.60 IOPS, 93.35 MiB/s 00:07:27.319 Latency(us) 00:07:27.319 [2024-11-04T02:18:14.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:27.319 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:27.319 Verification LBA range: start 0x0 length 0xbd0bd 00:07:27.319 Nvme0n1 : 5.08 1737.66 6.79 0.00 0.00 73518.49 14014.62 83079.48 00:07:27.319 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:27.319 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:27.319 Nvme0n1 : 5.08 1636.31 6.39 0.00 0.00 78044.44 16434.41 74610.22 00:07:27.319 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:27.319 Verification LBA range: start 0x0 length 0x4ff80 00:07:27.319 Nvme1n1p1 : 5.08 1737.07 6.79 0.00 0.00 73441.95 14115.45 80659.69 00:07:27.319 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:27.319 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:27.319 Nvme1n1p1 : 5.09 1635.08 6.39 0.00 0.00 77931.02 17745.13 68157.44 00:07:27.319 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:27.319 Verification LBA range: start 0x0 length 0x4ff7f 00:07:27.319 Nvme1n1p2 : 5.09 1735.10 6.78 0.00 0.00 73389.28 17845.96 76223.41 00:07:27.319 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:27.319 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:27.319 Nvme1n1p2 : 5.09 1634.55 6.38 0.00 0.00 77815.92 18652.55 65737.65 00:07:27.319 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:27.319 Verification LBA range: start 0x0 length 0x80000 00:07:27.319 Nvme2n1 : 5.09 1733.72 6.77 0.00 0.00 73304.73 17946.78 76626.71 00:07:27.319 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:27.319 Verification LBA range: start 0x80000 length 0x80000 00:07:27.319 Nvme2n1 : 5.09 1633.28 6.38 0.00 0.00 77725.49 21273.99 66947.54 00:07:27.319 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:27.319 Verification LBA range: start 0x0 length 0x80000 00:07:27.320 Nvme2n2 : 5.10 1732.54 6.77 0.00 0.00 73202.21 17140.18 79046.50 00:07:27.320 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:27.320 Verification LBA range: start 0x80000 length 0x80000 00:07:27.320 Nvme2n2 : 5.10 1632.18 6.38 0.00 0.00 77605.97 20467.40 70173.93 00:07:27.320 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:27.320 Verification LBA range: start 0x0 length 0x80000 00:07:27.320 Nvme2n3 : 5.10 1731.39 6.76 0.00 0.00 73116.36 13712.15 81062.99 00:07:27.320 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:27.320 Verification LBA range: start 0x80000 length 0x80000 00:07:27.320 Nvme2n3 : 5.10 1631.13 6.37 0.00 0.00 77478.97 14317.10 72190.42 00:07:27.320 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:27.320 Verification LBA range: start 0x0 length 0x20000 00:07:27.320 Nvme3n1 : 5.10 1730.38 6.76 0.00 0.00 73051.52 9527.93 82676.18 00:07:27.320 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:27.320 Verification LBA range: start 0x20000 length 0x20000 00:07:27.320 
Nvme3n1 : 5.10 1630.20 6.37 0.00 0.00 77422.71 11443.59 73803.62 00:07:27.320 [2024-11-04T02:18:14.431Z] =================================================================================================================== 00:07:27.320 [2024-11-04T02:18:14.431Z] Total : 23570.59 92.07 0.00 0.00 75437.41 9527.93 83079.48 00:07:28.696 00:07:28.696 real 0m7.223s 00:07:28.696 user 0m13.586s 00:07:28.696 sys 0m0.189s 00:07:28.696 02:18:15 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:28.696 02:18:15 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:28.696 ************************************ 00:07:28.696 END TEST bdev_verify 00:07:28.696 ************************************ 00:07:28.696 02:18:15 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:28.696 02:18:15 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:07:28.696 02:18:15 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:28.696 02:18:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:28.696 ************************************ 00:07:28.696 START TEST bdev_verify_big_io 00:07:28.696 ************************************ 00:07:28.696 02:18:15 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:28.696 [2024-11-04 02:18:15.547762] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:07:28.696 [2024-11-04 02:18:15.547898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61876 ] 00:07:28.696 [2024-11-04 02:18:15.707031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:28.986 [2024-11-04 02:18:15.821570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.986 [2024-11-04 02:18:15.821631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.553 Running I/O for 5 seconds... 
00:07:35.376 2215.00 IOPS, 138.44 MiB/s [2024-11-04T02:18:22.745Z] 2833.50 IOPS, 177.09 MiB/s [2024-11-04T02:18:23.678Z] 2798.00 IOPS, 174.88 MiB/s 00:07:36.567 Latency(us) 00:07:36.567 [2024-11-04T02:18:23.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.567 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:36.567 Verification LBA range: start 0x0 length 0xbd0b 00:07:36.567 Nvme0n1 : 6.02 65.41 4.09 0.00 0.00 1829434.24 13006.38 2077793.67 00:07:36.567 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:36.567 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:36.567 Nvme0n1 : 5.74 130.05 8.13 0.00 0.00 901869.30 99211.42 1258291.20 00:07:36.567 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:36.567 Verification LBA range: start 0x0 length 0x4ff8 00:07:36.567 Nvme1n1p1 : 6.03 74.34 4.65 0.00 0.00 1549555.96 29440.79 1729343.80 00:07:36.567 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:36.567 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:36.567 Nvme1n1p1 : 5.86 136.18 8.51 0.00 0.00 836552.95 66140.95 1045349.61 00:07:36.567 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:36.567 Verification LBA range: start 0x0 length 0x4ff7 00:07:36.567 Nvme1n1p2 : 6.10 79.30 4.96 0.00 0.00 1365273.30 25004.50 1548666.09 00:07:36.567 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:36.567 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:36.567 Nvme1n1p2 : 5.86 141.93 8.87 0.00 0.00 787326.70 46782.62 1032444.06 00:07:36.567 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:36.567 Verification LBA range: start 0x0 length 0x8000 00:07:36.567 Nvme2n1 : 6.16 93.35 5.83 0.00 0.00 1115830.40 17644.31 1503496.66 00:07:36.567 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:36.567 Verification LBA range: start 0x8000 length 0x8000 00:07:36.567 Nvme2n1 : 5.92 151.23 9.45 0.00 0.00 720854.36 35086.97 1051802.39 00:07:36.567 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:36.567 Verification LBA range: start 0x0 length 0x8000 00:07:36.567 Nvme2n2 : 6.30 121.88 7.62 0.00 0.00 817856.33 22584.71 1542213.32 00:07:36.567 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:36.567 Verification LBA range: start 0x8000 length 0x8000 00:07:36.567 Nvme2n2 : 5.99 170.94 10.68 0.00 0.00 622263.88 850.71 1071160.71 00:07:36.567 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:36.567 Verification LBA range: start 0x0 length 0x8000 00:07:36.567 Nvme2n3 : 6.55 195.27 12.20 0.00 0.00 488856.58 11443.59 1568024.42 00:07:36.567 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:36.567 Verification LBA range: start 0x8000 length 0x8000 00:07:36.567 Nvme2n3 : 5.53 126.29 7.89 0.00 0.00 975776.73 16736.89 1084066.26 00:07:36.567 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:36.567 Verification LBA range: start 0x0 length 0x2000 00:07:36.567 Nvme3n1 : 6.81 338.27 21.14 0.00 0.00 270298.83 428.50 1606741.07 00:07:36.567 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:36.567 Verification LBA range: start 0x2000 length 0x2000 00:07:36.567 Nvme3n1 : 5.64 119.20 7.45 0.00 0.00 1012012.67 92355.35 1793871.56 00:07:36.567 
[2024-11-04T02:18:23.678Z] =================================================================================================================== 00:07:36.567 [2024-11-04T02:18:23.678Z] Total : 1943.64 121.48 0.00 0.00 774730.07 428.50 2077793.67 00:07:38.487 00:07:38.487 real 0m9.675s 00:07:38.487 user 0m17.719s 00:07:38.487 sys 0m0.266s 00:07:38.487 02:18:25 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:38.487 02:18:25 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:38.487 ************************************ 00:07:38.487 END TEST bdev_verify_big_io 00:07:38.487 ************************************ 00:07:38.487 02:18:25 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:38.487 02:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:07:38.487 02:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:38.487 02:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:38.487 ************************************ 00:07:38.487 START TEST bdev_write_zeroes 00:07:38.487 ************************************ 00:07:38.487 02:18:25 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:38.487 [2024-11-04 02:18:25.267659] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:07:38.487 [2024-11-04 02:18:25.267775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61992 ] 00:07:38.487 [2024-11-04 02:18:25.429411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.487 [2024-11-04 02:18:25.526218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.052 Running I/O for 1 seconds... 
00:07:40.424 69888.00 IOPS, 273.00 MiB/s 00:07:40.424 Latency(us) 00:07:40.424 [2024-11-04T02:18:27.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:40.424 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:40.424 Nvme0n1 : 1.03 9911.39 38.72 0.00 0.00 12883.36 10989.88 24298.73 00:07:40.424 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:40.424 Nvme1n1p1 : 1.03 9899.15 38.67 0.00 0.00 12883.68 10788.23 24702.03 00:07:40.424 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:40.424 Nvme1n1p2 : 1.03 9886.48 38.62 0.00 0.00 12872.02 10788.23 23794.61 00:07:40.424 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:40.424 Nvme2n1 : 1.03 9874.80 38.57 0.00 0.00 12849.40 9981.64 23391.31 00:07:40.424 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:40.424 Nvme2n2 : 1.03 9863.73 38.53 0.00 0.00 12839.73 8973.39 22685.54 00:07:40.424 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:40.424 Nvme2n3 : 1.03 9852.71 38.49 0.00 0.00 12812.56 6604.01 22786.36 00:07:40.424 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:40.424 Nvme3n1 : 1.03 9841.55 38.44 0.00 0.00 12806.72 6956.90 24601.21 00:07:40.424 [2024-11-04T02:18:27.535Z] =================================================================================================================== 00:07:40.424 [2024-11-04T02:18:27.535Z] Total : 69129.82 270.04 0.00 0.00 12849.64 6604.01 24702.03 00:07:40.991 00:07:40.991 real 0m2.671s 00:07:40.991 user 0m2.384s 00:07:40.991 sys 0m0.173s 00:07:40.991 02:18:27 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:40.991 02:18:27 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:40.991 ************************************ 00:07:40.991 END TEST bdev_write_zeroes 00:07:40.991 ************************************ 00:07:40.992 02:18:27 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:40.992 02:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:07:40.992 02:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:40.992 02:18:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.992 ************************************ 00:07:40.992 START TEST bdev_json_nonenclosed 00:07:40.992 ************************************ 00:07:40.992 02:18:27 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:40.992 [2024-11-04 02:18:27.978569] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:07:40.992 [2024-11-04 02:18:27.978687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62045 ] 00:07:41.249 [2024-11-04 02:18:28.138646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.249 [2024-11-04 02:18:28.235009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.249 [2024-11-04 02:18:28.235088] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:41.249 [2024-11-04 02:18:28.235104] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:41.249 [2024-11-04 02:18:28.235113] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:41.508 00:07:41.508 real 0m0.493s 00:07:41.508 user 0m0.294s 00:07:41.508 sys 0m0.095s 00:07:41.508 02:18:28 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:41.508 02:18:28 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:41.508 ************************************ 00:07:41.508 END TEST bdev_json_nonenclosed 00:07:41.508 ************************************ 00:07:41.508 02:18:28 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:41.508 02:18:28 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:07:41.508 02:18:28 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:41.508 02:18:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:41.508 ************************************ 00:07:41.508 START TEST bdev_json_nonarray 00:07:41.508 ************************************ 00:07:41.508 02:18:28 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:41.508 [2024-11-04 02:18:28.520314] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:07:41.508 [2024-11-04 02:18:28.520435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62065 ] 00:07:41.767 [2024-11-04 02:18:28.680117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.767 [2024-11-04 02:18:28.777222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.767 [2024-11-04 02:18:28.777308] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:07:41.767 [2024-11-04 02:18:28.777326] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:41.767 [2024-11-04 02:18:28.777334] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:42.026 00:07:42.026 real 0m0.494s 00:07:42.026 user 0m0.304s 00:07:42.026 sys 0m0.087s 00:07:42.026 02:18:28 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:42.026 ************************************ 00:07:42.026 END TEST bdev_json_nonarray 00:07:42.026 ************************************ 00:07:42.026 02:18:28 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:42.026 02:18:28 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:42.026 02:18:28 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:42.026 02:18:28 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:42.026 02:18:28 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:42.026 02:18:28 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:42.026 02:18:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:42.026 ************************************ 00:07:42.026 START TEST bdev_gpt_uuid 00:07:42.026 ************************************ 00:07:42.026 02:18:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1127 -- # bdev_gpt_uuid 00:07:42.026 02:18:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:42.026 02:18:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:42.026 02:18:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62096 00:07:42.026 02:18:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:42.026 02:18:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62096 00:07:42.026 02:18:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # '[' -z 62096 ']' 00:07:42.026 02:18:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.026 02:18:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:42.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.026 02:18:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.026 02:18:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:42.026 02:18:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:42.026 02:18:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:42.026 [2024-11-04 02:18:29.064642] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:07:42.026 [2024-11-04 02:18:29.064764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62096 ] 00:07:42.284 [2024-11-04 02:18:29.225982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.284 [2024-11-04 02:18:29.323468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.848 02:18:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:42.848 02:18:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@866 -- # return 0 00:07:42.848 02:18:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:42.848 02:18:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.848 02:18:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:43.414 Some configs were skipped because the RPC state that can call them passed over. 00:07:43.414 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.414 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:43.414 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.414 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:43.414 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.414 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:43.414 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.414 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:43.414 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.414 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:43.414 { 00:07:43.414 "name": "Nvme1n1p1", 00:07:43.414 "aliases": [ 00:07:43.414 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:43.414 ], 00:07:43.414 "product_name": "GPT Disk", 00:07:43.414 "block_size": 4096, 00:07:43.414 "num_blocks": 655104, 00:07:43.414 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:43.414 "assigned_rate_limits": { 00:07:43.414 "rw_ios_per_sec": 0, 00:07:43.414 "rw_mbytes_per_sec": 0, 00:07:43.414 "r_mbytes_per_sec": 0, 00:07:43.414 "w_mbytes_per_sec": 0 00:07:43.414 }, 00:07:43.414 "claimed": false, 00:07:43.414 "zoned": false, 00:07:43.414 "supported_io_types": { 00:07:43.414 "read": true, 00:07:43.414 "write": true, 00:07:43.414 "unmap": true, 00:07:43.414 "flush": true, 00:07:43.414 "reset": true, 00:07:43.414 "nvme_admin": false, 00:07:43.414 "nvme_io": false, 00:07:43.414 "nvme_io_md": false, 00:07:43.414 "write_zeroes": true, 00:07:43.414 "zcopy": false, 00:07:43.414 "get_zone_info": false, 00:07:43.414 "zone_management": false, 00:07:43.414 "zone_append": false, 00:07:43.414 "compare": true, 00:07:43.414 "compare_and_write": false, 00:07:43.414 "abort": true, 00:07:43.414 "seek_hole": false, 00:07:43.414 "seek_data": false, 00:07:43.414 "copy": true, 00:07:43.414 "nvme_iov_md": false 00:07:43.414 }, 00:07:43.414 "driver_specific": { 
00:07:43.414 "gpt": { 00:07:43.414 "base_bdev": "Nvme1n1", 00:07:43.414 "offset_blocks": 256, 00:07:43.414 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:43.414 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:43.414 "partition_name": "SPDK_TEST_first" 00:07:43.414 } 00:07:43.414 } 00:07:43.414 } 00:07:43.414 ]' 00:07:43.414 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:43.414 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:43.414 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:43.414 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:43.414 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:43.414 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:43.414 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:43.414 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.414 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:43.414 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.414 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:43.414 { 00:07:43.414 "name": "Nvme1n1p2", 00:07:43.414 "aliases": [ 00:07:43.414 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:43.414 ], 00:07:43.414 "product_name": "GPT Disk", 00:07:43.415 "block_size": 4096, 00:07:43.415 "num_blocks": 655103, 00:07:43.415 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:43.415 "assigned_rate_limits": { 00:07:43.415 "rw_ios_per_sec": 0, 00:07:43.415 "rw_mbytes_per_sec": 0, 00:07:43.415 "r_mbytes_per_sec": 0, 00:07:43.415 "w_mbytes_per_sec": 0 00:07:43.415 }, 00:07:43.415 "claimed": false, 00:07:43.415 "zoned": false, 00:07:43.415 "supported_io_types": { 00:07:43.415 "read": true, 00:07:43.415 "write": true, 00:07:43.415 "unmap": true, 00:07:43.415 "flush": true, 00:07:43.415 "reset": true, 00:07:43.415 "nvme_admin": false, 00:07:43.415 "nvme_io": false, 00:07:43.415 "nvme_io_md": false, 00:07:43.415 "write_zeroes": true, 00:07:43.415 "zcopy": false, 00:07:43.415 "get_zone_info": false, 00:07:43.415 "zone_management": false, 00:07:43.415 "zone_append": false, 00:07:43.415 "compare": true, 00:07:43.415 "compare_and_write": false, 00:07:43.415 "abort": true, 00:07:43.415 "seek_hole": false, 00:07:43.415 "seek_data": false, 00:07:43.415 "copy": true, 00:07:43.415 "nvme_iov_md": false 00:07:43.415 }, 00:07:43.415 "driver_specific": { 00:07:43.415 "gpt": { 00:07:43.415 "base_bdev": "Nvme1n1", 00:07:43.415 "offset_blocks": 655360, 00:07:43.415 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:43.415 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:43.415 "partition_name": "SPDK_TEST_second" 00:07:43.415 } 00:07:43.415 } 00:07:43.415 } 00:07:43.415 ]' 00:07:43.415 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:43.415 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:43.415 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:43.415 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:43.415 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:43.415 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:43.415 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62096 00:07:43.415 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # '[' -z 62096 ']' 00:07:43.415 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # kill -0 62096 00:07:43.415 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # uname 00:07:43.415 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:43.415 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62096 00:07:43.415 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:43.415 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:43.415 killing process with pid 62096 00:07:43.415 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62096' 00:07:43.415 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@971 -- # kill 62096 00:07:43.415 02:18:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@976 -- # wait 62096 00:07:45.315 00:07:45.315 real 0m2.976s 00:07:45.315 user 0m3.121s 00:07:45.315 sys 0m0.363s 00:07:45.315 02:18:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:45.315 02:18:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:45.315 ************************************ 00:07:45.315 END TEST bdev_gpt_uuid 00:07:45.315 ************************************ 00:07:45.315 02:18:32 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:45.315 02:18:32 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:45.315 02:18:32 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:45.315 02:18:32 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:45.315 02:18:32 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:45.315 02:18:32 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:45.315 02:18:32 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:45.315 02:18:32 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:45.315 02:18:32 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:45.315 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:45.574 Waiting for block devices as requested 00:07:45.574 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:45.574 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:45.574 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:45.574 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:50.857 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:50.857 02:18:37 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:50.857 02:18:37 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:51.120 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:51.120 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:51.120 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:51.120 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:51.120 02:18:37 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:51.120 00:07:51.120 real 0m54.680s 00:07:51.120 user 1m10.233s 00:07:51.120 sys 0m7.219s 00:07:51.120 02:18:38 blockdev_nvme_gpt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:51.120 02:18:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.120 ************************************ 00:07:51.120 END TEST blockdev_nvme_gpt 00:07:51.120 ************************************ 00:07:51.120 02:18:38 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:51.120 02:18:38 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:51.120 02:18:38 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:51.120 02:18:38 -- common/autotest_common.sh@10 -- # set +x 00:07:51.120 ************************************ 00:07:51.120 START TEST nvme 00:07:51.120 ************************************ 00:07:51.120 02:18:38 nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:51.120 * Looking for test storage... 00:07:51.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:51.120 02:18:38 nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:51.120 02:18:38 nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:07:51.120 02:18:38 nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:51.120 02:18:38 nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:51.120 02:18:38 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:51.120 02:18:38 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:51.120 02:18:38 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:51.120 02:18:38 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.120 02:18:38 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:51.120 02:18:38 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:51.120 02:18:38 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:51.120 02:18:38 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:51.120 02:18:38 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:51.120 02:18:38 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:51.120 02:18:38 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:51.120 02:18:38 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:51.120 02:18:38 nvme -- scripts/common.sh@345 -- # : 1 00:07:51.120 02:18:38 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:51.120 02:18:38 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:51.120 02:18:38 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:51.120 02:18:38 nvme -- scripts/common.sh@353 -- # local d=1 00:07:51.120 02:18:38 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.120 02:18:38 nvme -- scripts/common.sh@355 -- # echo 1 00:07:51.120 02:18:38 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:51.120 02:18:38 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:51.120 02:18:38 nvme -- scripts/common.sh@353 -- # local d=2 00:07:51.120 02:18:38 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.120 02:18:38 nvme -- scripts/common.sh@355 -- # echo 2 00:07:51.120 02:18:38 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:51.120 02:18:38 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:51.120 02:18:38 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:51.120 02:18:38 nvme -- scripts/common.sh@368 -- # return 0 00:07:51.120 02:18:38 nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.120 02:18:38 nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:51.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.120 --rc genhtml_branch_coverage=1 00:07:51.120 --rc genhtml_function_coverage=1 00:07:51.120 --rc genhtml_legend=1 00:07:51.120 --rc geninfo_all_blocks=1 00:07:51.120 --rc geninfo_unexecuted_blocks=1 00:07:51.120 00:07:51.120 ' 00:07:51.120 02:18:38 nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:51.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.120 --rc genhtml_branch_coverage=1 00:07:51.120 --rc genhtml_function_coverage=1 00:07:51.120 --rc genhtml_legend=1 00:07:51.120 --rc geninfo_all_blocks=1 00:07:51.120 --rc geninfo_unexecuted_blocks=1 00:07:51.120 00:07:51.120 ' 00:07:51.120 02:18:38 nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:51.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.120 --rc genhtml_branch_coverage=1 00:07:51.120 --rc genhtml_function_coverage=1 00:07:51.120 --rc genhtml_legend=1 00:07:51.120 --rc geninfo_all_blocks=1 00:07:51.120 --rc geninfo_unexecuted_blocks=1 00:07:51.120 00:07:51.120 ' 00:07:51.120 02:18:38 nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:51.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.120 --rc genhtml_branch_coverage=1 00:07:51.120 --rc genhtml_function_coverage=1 00:07:51.120 --rc genhtml_legend=1 00:07:51.120 --rc geninfo_all_blocks=1 00:07:51.120 --rc geninfo_unexecuted_blocks=1 00:07:51.120 00:07:51.120 ' 00:07:51.120 02:18:38 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:51.691 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:52.258 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:52.258 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:52.258 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:52.258 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:52.258 02:18:39 nvme -- nvme/nvme.sh@79 -- # uname 00:07:52.258 02:18:39 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:52.258 02:18:39 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:52.258 Waiting for stub to ready for secondary processes... 
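The scripts/common.sh trace above reduces to a component-wise comparison of dotted version strings: split both versions on '.', '-' and ':', then compare field by field, treating missing fields as 0. A minimal bash sketch of that logic (cmp_lt and the echoed message are illustrative stand-ins, not the verbatim scripts/common.sh source):

    # Return 0 (true) when version $1 sorts strictly below version $2.
    cmp_lt() {
        local -a v1 v2
        local i n
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            # Missing components count as 0, so "2" is compared as "2.0".
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1 # versions are equal
    }

    cmp_lt 1.15 2 && echo "lcov 1.x detected: use --rc lcov_*_coverage=1 option names"

Because 1.15 sorts below 2 here, the run exports the lcov 1.x style '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' flags seen in the LCOV_OPTS block above.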
00:07:52.258 02:18:39 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:52.258 02:18:39 nvme -- common/autotest_common.sh@1084 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:52.258 02:18:39 nvme -- common/autotest_common.sh@1070 -- # _randomize_va_space=2 00:07:52.258 02:18:39 nvme -- common/autotest_common.sh@1071 -- # echo 0 00:07:52.258 02:18:39 nvme -- common/autotest_common.sh@1073 -- # stubpid=62730 00:07:52.258 02:18:39 nvme -- common/autotest_common.sh@1072 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:52.258 02:18:39 nvme -- common/autotest_common.sh@1074 -- # echo Waiting for stub to ready for secondary processes... 00:07:52.258 02:18:39 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:52.259 02:18:39 nvme -- common/autotest_common.sh@1077 -- # [[ -e /proc/62730 ]] 00:07:52.259 02:18:39 nvme -- common/autotest_common.sh@1078 -- # sleep 1s 00:07:52.259 [2024-11-04 02:18:39.267748] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:07:52.259 [2024-11-04 02:18:39.267984] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:53.198 [2024-11-04 02:18:40.028336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:53.199 [2024-11-04 02:18:40.135826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.199 [2024-11-04 02:18:40.136767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.199 [2024-11-04 02:18:40.136854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:53.199 [2024-11-04 02:18:40.152399] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:53.199 [2024-11-04 02:18:40.152542] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:53.199 [2024-11-04 02:18:40.166915] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:53.199 [2024-11-04 02:18:40.167107] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:53.199 [2024-11-04 02:18:40.171034] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:53.199 [2024-11-04 02:18:40.171393] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:53.199 [2024-11-04 02:18:40.171673] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:53.199 [2024-11-04 02:18:40.175353] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:53.199 [2024-11-04 02:18:40.175681] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:53.199 [2024-11-04 02:18:40.175768] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:53.199 [2024-11-04 02:18:40.178737] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:53.199 [2024-11-04 02:18:40.179159] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:53.199 [2024-11-04 02:18:40.179293] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:53.199 [2024-11-04 02:18:40.179362] 
nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:53.199 [2024-11-04 02:18:40.179447] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:53.199 02:18:40 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:53.199 02:18:40 nvme -- common/autotest_common.sh@1080 -- # echo done. 00:07:53.199 done. 00:07:53.199 02:18:40 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:53.199 02:18:40 nvme -- common/autotest_common.sh@1103 -- # '[' 10 -le 1 ']' 00:07:53.199 02:18:40 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:53.199 02:18:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:53.199 ************************************ 00:07:53.199 START TEST nvme_reset 00:07:53.199 ************************************ 00:07:53.199 02:18:40 nvme.nvme_reset -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:53.457 Initializing NVMe Controllers 00:07:53.457 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:53.457 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:53.457 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:53.457 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:53.457 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:53.457 00:07:53.457 real 0m0.207s 00:07:53.457 user 0m0.071s 00:07:53.457 sys 0m0.094s 00:07:53.457 02:18:40 nvme.nvme_reset -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:53.457 02:18:40 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:53.457 ************************************ 00:07:53.457 END TEST nvme_reset 00:07:53.457 ************************************ 00:07:53.457 02:18:40 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:53.457 02:18:40 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:53.457 02:18:40 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:53.457 02:18:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:53.457 ************************************ 00:07:53.457 START TEST nvme_identify 00:07:53.457 ************************************ 00:07:53.457 02:18:40 nvme.nvme_identify -- common/autotest_common.sh@1127 -- # nvme_identify 00:07:53.457 02:18:40 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:53.457 02:18:40 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:53.457 02:18:40 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:53.457 02:18:40 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:53.457 02:18:40 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:53.457 02:18:40 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:07:53.457 02:18:40 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:53.457 02:18:40 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:53.457 02:18:40 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:53.457 02:18:40 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:07:53.457 02:18:40 nvme.nvme_identify -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:53.457 
02:18:40 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:53.717 [2024-11-04 02:18:40.754434] nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62751 terminated unexpected 00:07:53.717 ===================================================== 00:07:53.717 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:53.717 ===================================================== 00:07:53.717 Controller Capabilities/Features 00:07:53.717 ================================ 00:07:53.717 Vendor ID: 1b36 00:07:53.717 Subsystem Vendor ID: 1af4 00:07:53.717 Serial Number: 12340 00:07:53.717 Model Number: QEMU NVMe Ctrl 00:07:53.717 Firmware Version: 8.0.0 00:07:53.717 Recommended Arb Burst: 6 00:07:53.717 IEEE OUI Identifier: 00 54 52 00:07:53.717 Multi-path I/O 00:07:53.717 May have multiple subsystem ports: No 00:07:53.717 May have multiple controllers: No 00:07:53.717 Associated with SR-IOV VF: No 00:07:53.717 Max Data Transfer Size: 524288 00:07:53.717 Max Number of Namespaces: 256 00:07:53.717 Max Number of I/O Queues: 64 00:07:53.717 NVMe Specification Version (VS): 1.4 00:07:53.717 NVMe Specification Version (Identify): 1.4 00:07:53.717 Maximum Queue Entries: 2048 00:07:53.717 Contiguous Queues Required: Yes 00:07:53.717 Arbitration Mechanisms Supported 00:07:53.717 Weighted Round Robin: Not Supported 00:07:53.717 Vendor Specific: Not Supported 00:07:53.717 Reset Timeout: 7500 ms 00:07:53.717 Doorbell Stride: 4 bytes 00:07:53.717 NVM Subsystem Reset: Not Supported 00:07:53.717 Command Sets Supported 00:07:53.717 NVM Command Set: Supported 00:07:53.717 Boot Partition: Not Supported 00:07:53.717 Memory Page Size Minimum: 4096 bytes 00:07:53.717 Memory Page Size Maximum: 65536 bytes 00:07:53.717 Persistent Memory Region: Not Supported 00:07:53.717 Optional Asynchronous Events Supported 00:07:53.717 Namespace Attribute Notices: Supported 00:07:53.717 Firmware Activation Notices: Not Supported 00:07:53.717 ANA Change Notices: Not Supported 00:07:53.717 PLE Aggregate Log Change Notices: Not Supported 00:07:53.717 LBA Status Info Alert Notices: Not Supported 00:07:53.717 EGE Aggregate Log Change Notices: Not Supported 00:07:53.717 Normal NVM Subsystem Shutdown event: Not Supported 00:07:53.717 Zone Descriptor Change Notices: Not Supported 00:07:53.717 Discovery Log Change Notices: Not Supported 00:07:53.717 Controller Attributes 00:07:53.717 128-bit Host Identifier: Not Supported 00:07:53.717 Non-Operational Permissive Mode: Not Supported 00:07:53.717 NVM Sets: Not Supported 00:07:53.717 Read Recovery Levels: Not Supported 00:07:53.717 Endurance Groups: Not Supported 00:07:53.717 Predictable Latency Mode: Not Supported 00:07:53.717 Traffic Based Keep ALive: Not Supported 00:07:53.717 Namespace Granularity: Not Supported 00:07:53.717 SQ Associations: Not Supported 00:07:53.717 UUID List: Not Supported 00:07:53.717 Multi-Domain Subsystem: Not Supported 00:07:53.717 Fixed Capacity Management: Not Supported 00:07:53.717 Variable Capacity Management: Not Supported 00:07:53.717 Delete Endurance Group: Not Supported 00:07:53.717 Delete NVM Set: Not Supported 00:07:53.717 Extended LBA Formats Supported: Supported 00:07:53.717 Flexible Data Placement Supported: Not Supported 00:07:53.717 00:07:53.717 Controller Memory Buffer Support 00:07:53.717 ================================ 00:07:53.717 Supported: No 00:07:53.717 00:07:53.717 Persistent Memory Region Support 00:07:53.717 ================================ 00:07:53.717 Supported: No 00:07:53.717
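Everything from the banner above through the rest of this dump is the stdout of that single spdk_nvme_identify invocation; the nvme_identify shell function first collected the controllers' PCI addresses (bdfs) by piping gen_nvme.sh through jq. A condensed, runnable restatement of that enumeration (paths assume this run's /home/vagrant/spdk_repo/spdk checkout; the guard message is illustrative):

    #!/usr/bin/env bash
    rootdir=/home/vagrant/spdk_repo/spdk
    # gen_nvme.sh emits a bdev JSON config; each controller's PCI address
    # sits at .config[].params.traddr, exactly the jq filter traced above.
    mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"
    # One invocation walks every attached controller; -i 0 matches the
    # '-i 0' shared-memory id the stub process was started with.
    "$rootdir/build/bin/spdk_nvme_identify" -i 0

Identify attaches to all four controllers in one pass, which is why the report continues below with 12341 (0000:00:11.0), 12343 (0000:00:13.0) and 12342 (0000:00:12.0) after this 12340 section.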
00:07:53.717 Admin Command Set Attributes 00:07:53.717 ============================ 00:07:53.717 Security Send/Receive: Not Supported 00:07:53.718 Format NVM: Supported 00:07:53.718 Firmware Activate/Download: Not Supported 00:07:53.718 Namespace Management: Supported 00:07:53.718 Device Self-Test: Not Supported 00:07:53.718 Directives: Supported 00:07:53.718 NVMe-MI: Not Supported 00:07:53.718 Virtualization Management: Not Supported 00:07:53.718 Doorbell Buffer Config: Supported 00:07:53.718 Get LBA Status Capability: Not Supported 00:07:53.718 Command & Feature Lockdown Capability: Not Supported 00:07:53.718 Abort Command Limit: 4 00:07:53.718 Async Event Request Limit: 4 00:07:53.718 Number of Firmware Slots: N/A 00:07:53.718 Firmware Slot 1 Read-Only: N/A 00:07:53.718 Firmware Activation Without Reset: N/A 00:07:53.718 Multiple Update Detection Support: N/A 00:07:53.718 Firmware Update Granularity: No Information Provided 00:07:53.718 Per-Namespace SMART Log: Yes 00:07:53.718 Asymmetric Namespace Access Log Page: Not Supported 00:07:53.718 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:53.718 Command Effects Log Page: Supported 00:07:53.718 Get Log Page Extended Data: Supported 00:07:53.718 Telemetry Log Pages: Not Supported 00:07:53.718 Persistent Event Log Pages: Not Supported 00:07:53.718 Supported Log Pages Log Page: May Support 00:07:53.718 Commands Supported & Effects Log Page: Not Supported 00:07:53.718 Feature Identifiers & Effects Log Page:May Support 00:07:53.718 NVMe-MI Commands & Effects Log Page: May Support 00:07:53.718 Data Area 4 for Telemetry Log: Not Supported 00:07:53.718 Error Log Page Entries Supported: 1 00:07:53.718 Keep Alive: Not Supported 00:07:53.718 00:07:53.718 NVM Command Set Attributes 00:07:53.718 ========================== 00:07:53.718 Submission Queue Entry Size 00:07:53.718 Max: 64 00:07:53.718 Min: 64 00:07:53.718 Completion Queue Entry Size 00:07:53.718 Max: 16 00:07:53.718 Min: 16 00:07:53.718 Number of Namespaces: 256 00:07:53.718 Compare Command: Supported 00:07:53.718 Write Uncorrectable Command: Not Supported 00:07:53.718 Dataset Management Command: Supported 00:07:53.718 Write Zeroes Command: Supported 00:07:53.718 Set Features Save Field: Supported 00:07:53.718 Reservations: Not Supported 00:07:53.718 Timestamp: Supported 00:07:53.718 Copy: Supported 00:07:53.718 Volatile Write Cache: Present 00:07:53.718 Atomic Write Unit (Normal): 1 00:07:53.718 Atomic Write Unit (PFail): 1 00:07:53.718 Atomic Compare & Write Unit: 1 00:07:53.718 Fused Compare & Write: Not Supported 00:07:53.718 Scatter-Gather List 00:07:53.718 SGL Command Set: Supported 00:07:53.718 SGL Keyed: Not Supported 00:07:53.718 SGL Bit Bucket Descriptor: Not Supported 00:07:53.718 SGL Metadata Pointer: Not Supported 00:07:53.718 Oversized SGL: Not Supported 00:07:53.718 SGL Metadata Address: Not Supported 00:07:53.718 SGL Offset: Not Supported 00:07:53.718 Transport SGL Data Block: Not Supported 00:07:53.718 Replay Protected Memory Block: Not Supported 00:07:53.718 00:07:53.718 Firmware Slot Information 00:07:53.718 ========================= 00:07:53.718 Active slot: 1 00:07:53.718 Slot 1 Firmware Revision: 1.0 00:07:53.718 00:07:53.718 00:07:53.718 Commands Supported and Effects 00:07:53.718 ============================== 00:07:53.718 Admin Commands 00:07:53.718 -------------- 00:07:53.718 Delete I/O Submission Queue (00h): Supported 00:07:53.718 Create I/O Submission Queue (01h): Supported 00:07:53.718 Get Log Page (02h): Supported 00:07:53.718 Delete I/O Completion Queue 
(04h): Supported 00:07:53.718 Create I/O Completion Queue (05h): Supported 00:07:53.718 Identify (06h): Supported 00:07:53.718 Abort (08h): Supported 00:07:53.718 Set Features (09h): Supported 00:07:53.718 Get Features (0Ah): Supported 00:07:53.718 Asynchronous Event Request (0Ch): Supported 00:07:53.718 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:53.718 Directive Send (19h): Supported 00:07:53.718 Directive Receive (1Ah): Supported 00:07:53.718 Virtualization Management (1Ch): Supported 00:07:53.718 Doorbell Buffer Config (7Ch): Supported 00:07:53.718 Format NVM (80h): Supported LBA-Change 00:07:53.718 I/O Commands 00:07:53.718 ------------ 00:07:53.718 Flush (00h): Supported LBA-Change 00:07:53.718 Write (01h): Supported LBA-Change 00:07:53.718 Read (02h): Supported 00:07:53.718 Compare (05h): Supported 00:07:53.718 Write Zeroes (08h): Supported LBA-Change 00:07:53.718 Dataset Management (09h): Supported LBA-Change 00:07:53.718 Unknown (0Ch): Supported 00:07:53.718 Unknown (12h): Supported 00:07:53.718 Copy (19h): Supported LBA-Change 00:07:53.718 Unknown (1Dh): Supported LBA-Change 00:07:53.718 00:07:53.718 Error Log 00:07:53.718 ========= 00:07:53.718 00:07:53.718 Arbitration 00:07:53.718 =========== 00:07:53.718 Arbitration Burst: no limit 00:07:53.718 00:07:53.718 Power Management 00:07:53.718 ================ 00:07:53.718 Number of Power States: 1 00:07:53.718 Current Power State: Power State #0 00:07:53.718 Power State #0: 00:07:53.718 Max Power: 25.00 W 00:07:53.718 Non-Operational State: Operational 00:07:53.718 Entry Latency: 16 microseconds 00:07:53.718 Exit Latency: 4 microseconds 00:07:53.718 Relative Read Throughput: 0 00:07:53.718 Relative Read Latency: 0 00:07:53.718 Relative Write Throughput: 0 00:07:53.718 Relative Write Latency: 0 00:07:53.718 Idle Power: Not Reported 00:07:53.718 Active Power: Not Reported 00:07:53.718 Non-Operational Permissive Mode: Not Supported 00:07:53.718 00:07:53.718 Health Information 00:07:53.718 ================== 00:07:53.718 Critical Warnings: 00:07:53.718 Available Spare Space: OK 00:07:53.718 Temperature: OK 00:07:53.718 Device Reliability: OK 00:07:53.718 Read Only: No 00:07:53.718 Volatile Memory Backup: OK 00:07:53.718 Current Temperature: 323 Kelvin (50 Celsius) 00:07:53.718 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:53.718 Available Spare: 0% 00:07:53.718 Available Spare Threshold: 0% 00:07:53.718 Life Percentage Used: 0% 00:07:53.718 Data Units Read: 695 00:07:53.718 Data Units Written: 623 00:07:53.718 Host Read Commands: 41565 00:07:53.718 Host Write Commands: 41351 00:07:53.718 Controller Busy Time: 0 minutes 00:07:53.718 Power Cycles: 0 00:07:53.718 Power On Hours: 0 hours 00:07:53.718 Unsafe Shutdowns: 0 00:07:53.718 Unrecoverable Media Errors: 0 00:07:53.718 Lifetime Error Log Entries: 0 00:07:53.718 Warning Temperature Time: 0 minutes 00:07:53.718 Critical Temperature Time: 0 minutes 00:07:53.718 00:07:53.718 Number of Queues 00:07:53.718 ================ 00:07:53.718 Number of I/O Submission Queues: 64 00:07:53.718 Number of I/O Completion Queues: 64 00:07:53.718 00:07:53.718 ZNS Specific Controller Data 00:07:53.718 ============================ 00:07:53.718 Zone Append Size Limit: 0 00:07:53.718 00:07:53.718 00:07:53.718 Active Namespaces 00:07:53.719 ================= 00:07:53.719 Namespace ID:1 00:07:53.719 Error Recovery Timeout: Unlimited 00:07:53.719 Command Set Identifier: NVM (00h) 00:07:53.719 Deallocate: Supported 00:07:53.719 Deallocated/Unwritten Error: Supported 00:07:53.719 
Deallocated Read Value: All 0x00 00:07:53.719 Deallocate in Write Zeroes: Not Supported 00:07:53.719 Deallocated Guard Field: 0xFFFF 00:07:53.719 Flush: Supported 00:07:53.719 Reservation: Not Supported 00:07:53.719 Metadata Transferred as: Separate Metadata Buffer 00:07:53.719 Namespace Sharing Capabilities: Private 00:07:53.719 Size (in LBAs): 1548666 (5GiB) 00:07:53.719 Capacity (in LBAs): 1548666 (5GiB) 00:07:53.719 Utilization (in LBAs): 1548666 (5GiB) 00:07:53.719 Thin Provisioning: Not Supported 00:07:53.719 Per-NS Atomic Units: No 00:07:53.719 Maximum Single Source Range Length: 128 00:07:53.719 Maximum Copy Length: 128 00:07:53.719 Maximum Source Range Count: 128 00:07:53.719 NGUID/EUI64 Never Reused: No 00:07:53.719 Namespace Write Protected: No 00:07:53.719 Number of LBA Formats: 8 00:07:53.719 Current LBA Format: LBA Format #07 00:07:53.719 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:53.719 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:53.719 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:53.719 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:53.719 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:53.719 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:53.719 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:53.719 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:53.719 00:07:53.719 NVM Specific Namespace Data 00:07:53.719 =========================== 00:07:53.719 Logical Block Storage Tag Mask: 0 00:07:53.719 Protection Information Capabilities: 00:07:53.719 16b Guard Protection Information Storage Tag Support: No 00:07:53.719 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:53.719 Storage Tag Check Read Support: No 00:07:53.719 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.719 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.719 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.719 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.719 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.719 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.719 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.719 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.719 ===================================================== 00:07:53.719 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:53.719 ===================================================== 00:07:53.719 Controller Capabilities/Features 00:07:53.719 ================================ 00:07:53.719 Vendor ID: 1b36 00:07:53.719 Subsystem Vendor ID: 1af4 00:07:53.719 Serial Number: 12341 00:07:53.719 Model Number: QEMU NVMe Ctrl 00:07:53.719 Firmware Version: 8.0.0 00:07:53.719 Recommended Arb Burst: 6 00:07:53.719 IEEE OUI Identifier: 00 54 52 00:07:53.719 Multi-path I/O 00:07:53.719 May have multiple subsystem ports: No 00:07:53.719 May have multiple controllers: No 00:07:53.719 Associated with SR-IOV VF: No 00:07:53.719 Max Data Transfer Size: 524288 00:07:53.719 Max Number of Namespaces: 256 00:07:53.719 Max Number of I/O Queues: 64 00:07:53.719 NVMe Specification Version (VS): 1.4 00:07:53.719 NVMe Specification Version (Identify): 1.4 00:07:53.719 Maximum 
Queue Entries: 2048 00:07:53.719 Contiguous Queues Required: Yes 00:07:53.719 Arbitration Mechanisms Supported 00:07:53.719 Weighted Round Robin: Not Supported 00:07:53.719 Vendor Specific: Not Supported 00:07:53.719 Reset Timeout: 7500 ms 00:07:53.719 Doorbell Stride: 4 bytes 00:07:53.719 NVM Subsystem Reset: Not Supported 00:07:53.719 Command Sets Supported 00:07:53.719 NVM Command Set: Supported 00:07:53.719 Boot Partition: Not Supported 00:07:53.719 Memory Page Size Minimum: 4096 bytes 00:07:53.719 Memory Page Size Maximum: 65536 bytes 00:07:53.719 Persistent Memory Region: Not Supported 00:07:53.719 Optional Asynchronous Events Supported 00:07:53.719 Namespace Attribute Notices: Supported 00:07:53.719 Firmware Activation Notices: Not Supported 00:07:53.719 ANA Change Notices: Not Supported 00:07:53.719 PLE Aggregate Log Change Notices: Not Supported 00:07:53.719 LBA Status Info Alert Notices: Not Supported 00:07:53.719 EGE Aggregate Log Change Notices: Not Supported 00:07:53.719 Normal NVM Subsystem Shutdown event: Not Supported 00:07:53.719 Zone Descriptor Change Notices: Not Supported 00:07:53.719 Discovery Log Change Notices: Not Supported 00:07:53.719 Controller Attributes 00:07:53.719 128-bit Host Identifier: Not Supported 00:07:53.719 Non-Operational Permissive Mode: Not Supported 00:07:53.719 NVM Sets: Not Supported 00:07:53.719 Read Recovery Levels: Not Supported 00:07:53.719 Endurance Groups: Not Supported 00:07:53.719 Predictable Latency Mode: Not Supported 00:07:53.719 Traffic Based Keep ALive: Not Supported 00:07:53.719 Namespace Granularity: Not Supported 00:07:53.719 SQ Associations: Not Supported 00:07:53.719 UUID List: Not Supported 00:07:53.719 Multi-Domain Subsystem: Not Supported 00:07:53.719 Fixed Capacity Management: Not Supported 00:07:53.719 Variable Capacity Management: Not Supported 00:07:53.719 Delete Endurance Group: Not Supported 00:07:53.719 Delete NVM Set: Not Supported 00:07:53.719 Extended LBA Formats Supported: Supported 00:07:53.719 Flexible Data Placement Supported: Not Supported 00:07:53.719 00:07:53.719 Controller Memory Buffer Support 00:07:53.719 ================================ 00:07:53.719 Supported: No 00:07:53.719 00:07:53.719 Persistent Memory Region Support 00:07:53.719 ================================ 00:07:53.719 Supported: No 00:07:53.719 00:07:53.719 Admin Command Set Attributes 00:07:53.719 ============================ 00:07:53.719 Security Send/Receive: Not Supported 00:07:53.719 Format NVM: Supported 00:07:53.719 Firmware Activate/Download: Not Supported 00:07:53.719 Namespace Management: Supported 00:07:53.719 Device Self-Test: Not Supported 00:07:53.719 Directives: Supported 00:07:53.719 NVMe-MI: Not Supported 00:07:53.719 Virtualization Management: Not Supported 00:07:53.719 Doorbell Buffer Config: Supported 00:07:53.719 Get LBA Status Capability: Not Supported 00:07:53.719 Command & Feature Lockdown Capability: Not Supported 00:07:53.719 Abort Command Limit: 4 00:07:53.719 Async Event Request Limit: 4 00:07:53.719 Number of Firmware Slots: N/A 00:07:53.719 Firmware Slot 1 Read-Only: N/A 00:07:53.719 Firmware Activation Without Reset: N/A 00:07:53.719 Multiple Update Detection Support: N/A 00:07:53.719 Firmware Update Granularity: No Information Provided 00:07:53.719 Per-Namespace SMART Log: Yes 00:07:53.719 Asymmetric Namespace Access Log Page: Not Supported 00:07:53.719 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:53.719 Command Effects Log Page: Supported 00:07:53.719 Get Log Page Extended Data: Supported 00:07:53.719 
Telemetry Log Pages: Not Supported 00:07:53.719 Persistent Event Log Pages: Not Supported 00:07:53.719 Supported Log Pages Log Page: May Support 00:07:53.719 Commands Supported & Effects Log Page: Not Supported 00:07:53.719 Feature Identifiers & Effects Log Page:May Support 00:07:53.719 NVMe-MI Commands & Effects Log Page: May Support 00:07:53.719 Data Area 4 for Telemetry Log: Not Supported 00:07:53.719 Error Log Page Entries Supported: 1 00:07:53.719 Keep Alive: Not Supported 00:07:53.719 00:07:53.719 NVM Command Set Attributes 00:07:53.719 ========================== 00:07:53.719 Submission Queue Entry Size 00:07:53.719 Max: 64 00:07:53.719 Min: 64 00:07:53.719 Completion Queue Entry Size 00:07:53.719 Max: 16 00:07:53.719 Min: 16 00:07:53.719 Number of Namespaces: 256 00:07:53.719 Compare Command: Supported 00:07:53.719 Write Uncorrectable Command: Not Supported 00:07:53.719 Dataset Management Command: Supported 00:07:53.719 Write Zeroes Command: Supported 00:07:53.719 Set Features Save Field: Supported 00:07:53.719 Reservations: Not Supported 00:07:53.719 Timestamp: Supported 00:07:53.719 Copy: Supported 00:07:53.719 Volatile Write Cache: Present 00:07:53.719 Atomic Write Unit (Normal): 1 00:07:53.719 Atomic Write Unit (PFail): 1 00:07:53.719 Atomic Compare & Write Unit: 1 00:07:53.719 Fused Compare & Write: Not Supported 00:07:53.719 Scatter-Gather List 00:07:53.719 SGL Command Set: Supported 00:07:53.719 SGL Keyed: Not Supported 00:07:53.719 SGL Bit Bucket Descriptor: Not Supported 00:07:53.719 SGL Metadata Pointer: Not Supported 00:07:53.719 Oversized SGL: Not Supported 00:07:53.719 SGL Metadata Address: Not Supported 00:07:53.719 SGL Offset: Not Supported 00:07:53.719 Transport SGL Data Block: Not Supported 00:07:53.719 Replay Protected Memory Block: Not Supported 00:07:53.719 00:07:53.719 Firmware Slot Information 00:07:53.719 ========================= 00:07:53.719 Active slot: 1 00:07:53.719 Slot 1 Firmware Revision: 1.0 00:07:53.720 00:07:53.720 00:07:53.720 Commands Supported and Effects 00:07:53.720 ============================== 00:07:53.720 Admin Commands 00:07:53.720 -------------- 00:07:53.720 Delete I/O Submission Queue (00h): Supported 00:07:53.720 Create I/O Submission Queue (01h): Supported 00:07:53.720 Get Log Page (02h): Supported 00:07:53.720 Delete I/O Completion Queue (04h): Supported 00:07:53.720 Create I/O Completion Queue (05h): Supported 00:07:53.720 Identify (06h): Supported 00:07:53.720 Abort (08h): Supported 00:07:53.720 Set Features (09h): Supported 00:07:53.720 Get Features (0Ah): Supported 00:07:53.720 Asynchronous Event Request (0Ch): Supported 00:07:53.720 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:53.720 Directive Send (19h): Supported 00:07:53.720 Directive Receive (1Ah): Supported 00:07:53.720 Virtualization Management (1Ch): Supported 00:07:53.720 Doorbell Buffer Config (7Ch): Supported 00:07:53.720 Format NVM (80h): Supported LBA-Change 00:07:53.720 I/O Commands 00:07:53.720 ------------ 00:07:53.720 Flush (00h): Supported LBA-Change 00:07:53.720 Write (01h): Supported LBA-Change 00:07:53.720 Read (02h): Supported 00:07:53.720 Compare (05h): Supported 00:07:53.720 Write Zeroes (08h): Supported LBA-Change 00:07:53.720 Dataset Management (09h): Supported LBA-Change 00:07:53.720 Unknown (0Ch): Supported 00:07:53.720 Unknown (12h): Supported 00:07:53.720 Copy (19h): Supported LBA-Change 00:07:53.720 Unknown (1Dh): Supported LBA-Change 00:07:53.720 00:07:53.720 Error Log 00:07:53.720 ========= 00:07:53.720 00:07:53.720 Arbitration 
00:07:53.720 =========== 00:07:53.720 Arbitration Burst: no limit 00:07:53.720 00:07:53.720 Power Management 00:07:53.720 ================ 00:07:53.720 Number of Power States: 1 00:07:53.720 Current Power State: Power State #0 00:07:53.720 Power State #0: 00:07:53.720 Max Power: 25.00 W 00:07:53.720 Non-Operational State: Operational 00:07:53.720 Entry Latency: 16 microseconds 00:07:53.720 Exit Latency: 4 microseconds 00:07:53.720 Relative Read Throughput: 0 00:07:53.720 Relative Read Latency: 0 00:07:53.720 Relative Write Throughput: 0 00:07:53.720 Relative Write Latency: 0 00:07:53.720 Idle Power: Not Reported 00:07:53.720 Active Power: Not Reported 00:07:53.720 Non-Operational Permissive Mode: Not Supported 00:07:53.720 00:07:53.720 Health Information 00:07:53.720 ================== 00:07:53.720 Critical Warnings: 00:07:53.720 Available Spare Space: OK 00:07:53.720 Temperature: OK 00:07:53.720 Device Reliability: OK 00:07:53.720 Read Only: No 00:07:53.720 Volatile Memory Backup: OK 00:07:53.720 Current Temperature: 323 Kelvin (50 Celsius) 00:07:53.720 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:53.720 Available Spare: 0% 00:07:53.720 Available Spare Threshold: 0% 00:07:53.720 Life Percentage Used: 0% 00:07:53.720 Data Units Read: 1086 00:07:53.720 Data Units Written: 947 00:07:53.720 Host Read Commands: 61292 00:07:53.720 Host Write Commands: 59969 00:07:53.720 Controller Busy Time: 0 minutes 00:07:53.720 Power Cycles: 0 00:07:53.720 Power On Hours: 0 hours 00:07:53.720 Unsafe Shutdowns: 0 00:07:53.720 Unrecoverable Media Errors: 0 00:07:53.720 Lifetime Error Log Entries: 0 00:07:53.720 Warning Temperature Time: 0 minutes 00:07:53.720 Critical Temperature Time: 0 minutes 00:07:53.720 00:07:53.720 Number of Queues 00:07:53.720 ================ 00:07:53.720 Number of I/O Submission Queues: 64 00:07:53.720 Number of I/O Completion Queues: 64 00:07:53.720 00:07:53.720 ZNS Specific Controller Data 00:07:53.720 ============================ 00:07:53.720 Zone Append Size Limit: 0 00:07:53.720 00:07:53.720 00:07:53.720 Active Namespaces 00:07:53.720 ================= 00:07:53.720 Namespace ID:1 00:07:53.720 Error Recovery Timeout: Unlimited 00:07:53.720 Command Set Identifier: NVM (00h) 00:07:53.720 Deallocate: Supported 00:07:53.720 Deallocated/Unwritten Error: Supported 00:07:53.720 Deallocated Read Value: All 0x00 00:07:53.720 Deallocate in Write Zeroes: Not Supported 00:07:53.720 Deallocated Guard Field: 0xFFFF 00:07:53.720 Flush: Supported 00:07:53.720 Reservation: Not Supported 00:07:53.720 Namespace Sharing Capabilities: Private 00:07:53.720 Size (in LBAs): 1310720 (5GiB) 00:07:53.720 Capacity (in LBAs): 1310720 (5GiB) 00:07:53.720 Utilization (in LBAs): 1310720 (5GiB) 00:07:53.720 Thin Provisioning: Not Supported 00:07:53.720 Per-NS Atomic Units: No 00:07:53.720 Maximum Single Source Range Length: 128 00:07:53.720 Maximum Copy Length: 128 00:07:53.720 Maximum Source Range Count: 128 00:07:53.720 NGUID/EUI64 Never Reused: No 00:07:53.720 Namespace Write Protected: No 00:07:53.720 Number of LBA Formats: 8 00:07:53.720 Current LBA Format: LBA Format #04 00:07:53.720 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:53.720 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:53.720 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:53.720 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:53.720 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:53.720 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:53.720 LBA Format #06: Data Size: 4096 Metadata Size: 16 
00:07:53.720 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:53.720 00:07:53.720 NVM Specific Namespace Data 00:07:53.720 =========================== 00:07:53.720 Logical Block Storage Tag Mask: 0 00:07:53.720 Protection Information Capabilities: 00:07:53.720 16b Guard Protection Information Storage Tag Support: No 00:07:53.720 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:53.720 Storage Tag Check Read Support: No 00:07:53.720 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.720 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.720 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.720 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.720 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.720 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.720 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.720 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.720 ===================================================== 00:07:53.720 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:53.720 ===================================================== 00:07:53.720 Controller Capabilities/Features 00:07:53.720 ================================ 00:07:53.720 Vendor ID: 1b36 00:07:53.720 Subsystem Vendor ID: 1af4 00:07:53.720 Serial Number: 12343 00:07:53.720 Model Number: QEMU NVMe Ctrl 00:07:53.720 Firmware Version: 8.0.0 00:07:53.720 Recommended Arb Burst: 6 00:07:53.720 IEEE OUI Identifier: 00 54 52 00:07:53.720 Multi-path I/O 00:07:53.720 May have multiple subsystem ports: No 00:07:53.720 May have multiple controllers: Yes 00:07:53.720 Associated with SR-IOV VF: No 00:07:53.720 Max Data Transfer Size: 524288 00:07:53.720 Max Number of Namespaces: 256 00:07:53.720 Max Number of I/O Queues: 64 00:07:53.720 NVMe Specification Version (VS): 1.4 00:07:53.720 NVMe Specification Version (Identify): 1.4 00:07:53.720 Maximum Queue Entries: 2048 00:07:53.720 Contiguous Queues Required: Yes 00:07:53.720 Arbitration Mechanisms Supported 00:07:53.720 Weighted Round Robin: Not Supported 00:07:53.720 Vendor Specific: Not Supported 00:07:53.720 Reset Timeout: 7500 ms 00:07:53.720 Doorbell Stride: 4 bytes 00:07:53.720 NVM Subsystem Reset: Not Supported 00:07:53.720 Command Sets Supported 00:07:53.720 NVM Command Set: Supported 00:07:53.720 Boot Partition: Not Supported 00:07:53.720 Memory Page Size Minimum: 4096 bytes 00:07:53.720 Memory Page Size Maximum: 65536 bytes 00:07:53.720 Persistent Memory Region: Not Supported 00:07:53.720 Optional Asynchronous Events Supported 00:07:53.720 Namespace Attribute Notices: Supported 00:07:53.720 Firmware Activation Notices: Not Supported 00:07:53.720 ANA Change Notices: Not Supported 00:07:53.720 PLE Aggregate Log Change Notices: Not Supported 00:07:53.720 LBA Status Info Alert Notices: Not Supported 00:07:53.720 EGE Aggregate Log Change Notices: Not Supported 00:07:53.720 Normal NVM Subsystem Shutdown event: Not Supported 00:07:53.720 Zone Descriptor Change Notices: Not Supported 00:07:53.720 Discovery Log Change Notices: Not Supported 00:07:53.720 Controller Attributes 00:07:53.720 128-bit Host Identifier: Not Supported 00:07:53.720 Non-Operational 
Permissive Mode: Not Supported 00:07:53.720 NVM Sets: Not Supported 00:07:53.720 Read Recovery Levels: Not Supported 00:07:53.720 Endurance Groups: Supported 00:07:53.720 Predictable Latency Mode: Not Supported 00:07:53.720 Traffic Based Keep ALive: Not Supported 00:07:53.720 Namespace Granularity: Not Supported 00:07:53.720 SQ Associations: Not Supported 00:07:53.720 UUID List: Not Supported 00:07:53.720 Multi-Domain Subsystem: Not Supported 00:07:53.721 Fixed Capacity Management: Not Supported 00:07:53.721 Variable Capacity Management: Not Supported 00:07:53.721 Delete Endurance Group: Not Supported 00:07:53.721 Delete NVM Set: Not Supported 00:07:53.721 Extended LBA Formats Supported: Supported 00:07:53.721 Flexible Data Placement Supported: Supported 00:07:53.721 00:07:53.721 Controller Memory Buffer Support 00:07:53.721 ================================ 00:07:53.721 Supported: No 00:07:53.721 00:07:53.721 Persistent Memory Region Support 00:07:53.721 ================================ 00:07:53.721 Supported: No 00:07:53.721 00:07:53.721 Admin Command Set Attributes 00:07:53.721 ============================ 00:07:53.721 Security Send/Receive: Not Supported 00:07:53.721 Format NVM: Supported 00:07:53.721 Firmware Activate/Download: Not Supported 00:07:53.721 Namespace Management: Supported 00:07:53.721 Device Self-Test: Not Supported 00:07:53.721 Directives: Supported 00:07:53.721 NVMe-MI: Not Supported 00:07:53.721 Virtualization Management: Not Supported 00:07:53.721 Doorbell Buffer Config: Supported 00:07:53.721 Get LBA Status Capability: Not Supported 00:07:53.721 Command & Feature Lockdown Capability: Not Supported 00:07:53.721 Abort Command Limit: 4 00:07:53.721 Async Event Request Limit: 4 00:07:53.721 Number of Firmware Slots: N/A 00:07:53.721 Firmware Slot 1 Read-Only: N/A 00:07:53.721 Firmware Activation Without Reset: N/A 00:07:53.721 Multiple Update Detection Support: N/A 00:07:53.721 Firmware Update Granularity: No Information Provided 00:07:53.721 Per-Namespace SMART Log: Yes 00:07:53.721 Asymmetric Namespace Access Log Page: Not Supported 00:07:53.721 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:53.721 Command Effects Log Page: Supported 00:07:53.721 Get Log Page Extended Data: Supported 00:07:53.721 Telemetry Log Pages: Not Supported 00:07:53.721 Persistent Event Log Pages: Not Supported 00:07:53.721 Supported Log Pages Log Page: May Support 00:07:53.721 Commands Supported & Effects Log Page: Not Supported 00:07:53.721 Feature Identifiers & Effects Log Page:May Support 00:07:53.721 NVMe-MI Commands & Effects Log Page: May Support 00:07:53.721 Data Area 4 for Telemetry Log: Not Supported 00:07:53.721 Error Log Page Entries Supported: 1 00:07:53.721 Keep Alive: Not Supported 00:07:53.721 00:07:53.721 NVM Command Set Attributes 00:07:53.721 ========================== 00:07:53.721 Submission Queue Entry Size 00:07:53.721 Max: 64 00:07:53.721 Min: 64 00:07:53.721 Completion Queue Entry Size 00:07:53.721 Max: 16 00:07:53.721 Min: 16 00:07:53.721 Number of Namespaces: 256 00:07:53.721 Compare Command: Supported 00:07:53.721 Write Uncorrectable Command: Not Supported 00:07:53.721 Dataset Management Command: Supported 00:07:53.721 Write Zeroes Command: Supported 00:07:53.721 Set Features Save Field: Supported 00:07:53.721 Reservations: Not Supported 00:07:53.721 Timestamp: Supported 00:07:53.721 Copy: Supported 00:07:53.721 Volatile Write Cache: Present 00:07:53.721 Atomic Write Unit (Normal): 1 00:07:53.721 Atomic Write Unit (PFail): 1 00:07:53.721 Atomic Compare & Write 
Unit: 1 00:07:53.721 Fused Compare & Write: Not Supported 00:07:53.721 Scatter-Gather List 00:07:53.721 SGL Command Set: Supported 00:07:53.721 SGL Keyed: Not Supported 00:07:53.721 SGL Bit Bucket Descriptor: Not Supported 00:07:53.721 SGL Metadata Pointer: Not Supported 00:07:53.721 Oversized SGL: Not Supported 00:07:53.721 SGL Metadata Address: Not Supported 00:07:53.721 SGL Offset: Not Supported 00:07:53.721 Transport SGL Data Block: Not Supported 00:07:53.721 Replay Protected Memory Block: Not Supported 00:07:53.721 00:07:53.721 Firmware Slot Information 00:07:53.721 ========================= 00:07:53.721 Active slot: 1 00:07:53.721 Slot 1 Firmware Revision: 1.0 00:07:53.721 00:07:53.721 00:07:53.721 Commands Supported and Effects 00:07:53.721 ============================== 00:07:53.721 Admin Commands 00:07:53.721 -------------- 00:07:53.721 Delete I/O Submission Queue (00h): Supported 00:07:53.721 Create I/O Submission Queue (01h): Supported 00:07:53.721 Get Log Page (02h): Supported 00:07:53.721 Delete I/O Completion Queue (04h): Supported 00:07:53.721 Create I/O Completion Queue (05h): Supported 00:07:53.721 Identify (06h): Supported 00:07:53.721 Abort (08h): Supported 00:07:53.721 Set Features (09h): Supported 00:07:53.721 Get Features (0Ah): Supported 00:07:53.721 Asynchronous Event Request (0Ch): Supported 00:07:53.721 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:53.721 Directive Send (19h): Supported 00:07:53.721 Directive Receive (1Ah): Supported 00:07:53.721 Virtualization Management (1Ch): Supported 00:07:53.721 Doorbell Buffer Config (7Ch): Supported 00:07:53.721 Format NVM (80h): Supported LBA-Change 00:07:53.721 I/O Commands 00:07:53.721 ------------ 00:07:53.721 Flush (00h): Supported LBA-Change 00:07:53.721 Write (01h): Supported LBA-Change 00:07:53.721 Read (02h): Supported 00:07:53.721 Compare (05h): Supported 00:07:53.721 Write Zeroes (08h): Supported LBA-Change 00:07:53.721 Dataset Management (09h): Supported LBA-Change 00:07:53.721 Unknown (0Ch): Supported 00:07:53.721 Unknown (12h): Supported 00:07:53.721 Copy (19h): Supported LBA-Change 00:07:53.721 Unknown (1Dh): Supported LBA-Change 00:07:53.721 00:07:53.721 Error Log 00:07:53.721 ========= 00:07:53.721 00:07:53.721 Arbitration 00:07:53.721 =========== 00:07:53.721 Arbitration Burst: no limit 00:07:53.721 00:07:53.721 Power Management 00:07:53.721 ================ 00:07:53.721 Number of Power States: 1 00:07:53.721 Current Power State: Power State #0 00:07:53.721 Power State #0: 00:07:53.721 Max Power: 25.00 W 00:07:53.721 Non-Operational State: Operational 00:07:53.721 Entry Latency: 16 microseconds 00:07:53.721 Exit Latency: 4 microseconds 00:07:53.721 Relative Read Throughput: 0 00:07:53.721 Relative Read Latency: 0 00:07:53.721 Relative Write Throughput: 0 00:07:53.721 Relative Write Latency: 0 00:07:53.721 Idle Power: Not Reported 00:07:53.721 Active Power: Not Reported 00:07:53.721 Non-Operational Permissive Mode: Not Supported 00:07:53.721 00:07:53.721 Health Information 00:07:53.721 ================== 00:07:53.721 Critical Warnings: 00:07:53.721 Available Spare Space: OK 00:07:53.721 Temperature: OK 00:07:53.721 Device Reliability: OK 00:07:53.721 Read Only: No 00:07:53.721 Volatile Memory Backup: OK 00:07:53.721 Current Temperature: 323 Kelvin (50 Celsius) 00:07:53.721 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:53.721 Available Spare: 0% 00:07:53.721 Available Spare Threshold: 0% 00:07:53.721 Life Percentage Used: 0% 00:07:53.721 [2024-11-04 02:18:40.755559]
nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62751 terminated unexpected 00:07:53.721 [2024-11-04 02:18:40.756098] nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62751 terminated unexpected 00:07:53.721 [2024-11-04 02:18:40.757013] nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62751 terminated unexpected 00:07:53.721 Data Units Read: 1051 00:07:53.721 Data Units Written: 980 00:07:53.721 Host Read Commands: 44471 00:07:53.721 Host Write Commands: 43894 00:07:53.721 Controller Busy Time: 0 minutes 00:07:53.721 Power Cycles: 0 00:07:53.721 Power On Hours: 0 hours 00:07:53.721 Unsafe Shutdowns: 0 00:07:53.721 Unrecoverable Media Errors: 0 00:07:53.721 Lifetime Error Log Entries: 0 00:07:53.721 Warning Temperature Time: 0 minutes 00:07:53.721 Critical Temperature Time: 0 minutes 00:07:53.721 00:07:53.721 Number of Queues 00:07:53.721 ================ 00:07:53.721 Number of I/O Submission Queues: 64 00:07:53.721 Number of I/O Completion Queues: 64 00:07:53.721 00:07:53.721 ZNS Specific Controller Data 00:07:53.721 ============================ 00:07:53.721 Zone Append Size Limit: 0 00:07:53.721 00:07:53.721 00:07:53.721 Active Namespaces 00:07:53.721 ================= 00:07:53.721 Namespace ID:1 00:07:53.721 Error Recovery Timeout: Unlimited 00:07:53.721 Command Set Identifier: NVM (00h) 00:07:53.721 Deallocate: Supported 00:07:53.721 Deallocated/Unwritten Error: Supported 00:07:53.721 Deallocated Read Value: All 0x00 00:07:53.721 Deallocate in Write Zeroes: Not Supported 00:07:53.721 Deallocated Guard Field: 0xFFFF 00:07:53.721 Flush: Supported 00:07:53.721 Reservation: Not Supported 00:07:53.721 Namespace Sharing Capabilities: Multiple Controllers 00:07:53.721 Size (in LBAs): 262144 (1GiB) 00:07:53.721 Capacity (in LBAs): 262144 (1GiB) 00:07:53.721 Utilization (in LBAs): 262144 (1GiB) 00:07:53.721 Thin Provisioning: Not Supported 00:07:53.721 Per-NS Atomic Units: No 00:07:53.721 Maximum Single Source Range Length: 128 00:07:53.721 Maximum Copy Length: 128 00:07:53.721 Maximum Source Range Count: 128 00:07:53.721 NGUID/EUI64 Never Reused: No 00:07:53.721 Namespace Write Protected: No 00:07:53.722 Endurance group ID: 1 00:07:53.722 Number of LBA Formats: 8 00:07:53.722 Current LBA Format: LBA Format #04 00:07:53.722 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:53.722 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:53.722 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:53.722 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:53.722 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:53.722 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:53.722 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:53.722 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:53.722 00:07:53.722 Get Feature FDP: 00:07:53.722 ================ 00:07:53.722 Enabled: Yes 00:07:53.722 FDP configuration index: 0 00:07:53.722 00:07:53.722 FDP configurations log page 00:07:53.722 =========================== 00:07:53.722 Number of FDP configurations: 1 00:07:53.722 Version: 0 00:07:53.722 Size: 112 00:07:53.722 FDP Configuration Descriptor: 0 00:07:53.722 Descriptor Size: 96 00:07:53.722 Reclaim Group Identifier format: 2 00:07:53.722 FDP Volatile Write Cache: Not Present 00:07:53.722 FDP Configuration: Valid 00:07:53.722 Vendor Specific Size: 0 00:07:53.722 Number of Reclaim Groups: 2 00:07:53.722 Number of Reclaim Unit Handles: 8 00:07:53.722 Max
Placement Identifiers: 128 00:07:53.722 Number of Namespaces Supported: 256 00:07:53.722 Reclaim unit Nominal Size: 6000000 bytes 00:07:53.722 Estimated Reclaim Unit Time Limit: Not Reported 00:07:53.722 RUH Desc #000: RUH Type: Initially Isolated 00:07:53.722 RUH Desc #001: RUH Type: Initially Isolated 00:07:53.722 RUH Desc #002: RUH Type: Initially Isolated 00:07:53.722 RUH Desc #003: RUH Type: Initially Isolated 00:07:53.722 RUH Desc #004: RUH Type: Initially Isolated 00:07:53.722 RUH Desc #005: RUH Type: Initially Isolated 00:07:53.722 RUH Desc #006: RUH Type: Initially Isolated 00:07:53.722 RUH Desc #007: RUH Type: Initially Isolated 00:07:53.722 00:07:53.722 FDP reclaim unit handle usage log page 00:07:53.722 ====================================== 00:07:53.722 Number of Reclaim Unit Handles: 8 00:07:53.722 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:53.722 RUH Usage Desc #001: RUH Attributes: Unused 00:07:53.722 RUH Usage Desc #002: RUH Attributes: Unused 00:07:53.722 RUH Usage Desc #003: RUH Attributes: Unused 00:07:53.722 RUH Usage Desc #004: RUH Attributes: Unused 00:07:53.722 RUH Usage Desc #005: RUH Attributes: Unused 00:07:53.722 RUH Usage Desc #006: RUH Attributes: Unused 00:07:53.722 RUH Usage Desc #007: RUH Attributes: Unused 00:07:53.722 00:07:53.722 FDP statistics log page 00:07:53.722 ======================= 00:07:53.722 Host bytes with metadata written: 611033088 00:07:53.722 Media bytes with metadata written: 614391808 00:07:53.722 Media bytes erased: 0 00:07:53.722 00:07:53.722 FDP events log page 00:07:53.722 =================== 00:07:53.722 Number of FDP events: 0 00:07:53.722 00:07:53.722 NVM Specific Namespace Data 00:07:53.722 =========================== 00:07:53.722 Logical Block Storage Tag Mask: 0 00:07:53.722 Protection Information Capabilities: 00:07:53.722 16b Guard Protection Information Storage Tag Support: No 00:07:53.722 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:53.722 Storage Tag Check Read Support: No 00:07:53.722 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.722 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.722 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.722 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.722 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.722 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.722 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.722 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.722 ===================================================== 00:07:53.722 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:53.722 ===================================================== 00:07:53.722 Controller Capabilities/Features 00:07:53.722 ================================ 00:07:53.722 Vendor ID: 1b36 00:07:53.722 Subsystem Vendor ID: 1af4 00:07:53.722 Serial Number: 12342 00:07:53.722 Model Number: QEMU NVMe Ctrl 00:07:53.722 Firmware Version: 8.0.0 00:07:53.722 Recommended Arb Burst: 6 00:07:53.722 IEEE OUI Identifier: 00 54 52 00:07:53.722 Multi-path I/O 00:07:53.722 May have multiple subsystem ports: No 00:07:53.722 May have multiple controllers: No 00:07:53.722
Associated with SR-IOV VF: No 00:07:53.722 Max Data Transfer Size: 524288 00:07:53.722 Max Number of Namespaces: 256 00:07:53.722 Max Number of I/O Queues: 64 00:07:53.722 NVMe Specification Version (VS): 1.4 00:07:53.722 NVMe Specification Version (Identify): 1.4 00:07:53.722 Maximum Queue Entries: 2048 00:07:53.722 Contiguous Queues Required: Yes 00:07:53.722 Arbitration Mechanisms Supported 00:07:53.722 Weighted Round Robin: Not Supported 00:07:53.722 Vendor Specific: Not Supported 00:07:53.722 Reset Timeout: 7500 ms 00:07:53.722 Doorbell Stride: 4 bytes 00:07:53.722 NVM Subsystem Reset: Not Supported 00:07:53.722 Command Sets Supported 00:07:53.722 NVM Command Set: Supported 00:07:53.722 Boot Partition: Not Supported 00:07:53.722 Memory Page Size Minimum: 4096 bytes 00:07:53.722 Memory Page Size Maximum: 65536 bytes 00:07:53.722 Persistent Memory Region: Not Supported 00:07:53.722 Optional Asynchronous Events Supported 00:07:53.722 Namespace Attribute Notices: Supported 00:07:53.722 Firmware Activation Notices: Not Supported 00:07:53.722 ANA Change Notices: Not Supported 00:07:53.722 PLE Aggregate Log Change Notices: Not Supported 00:07:53.722 LBA Status Info Alert Notices: Not Supported 00:07:53.722 EGE Aggregate Log Change Notices: Not Supported 00:07:53.722 Normal NVM Subsystem Shutdown event: Not Supported 00:07:53.722 Zone Descriptor Change Notices: Not Supported 00:07:53.722 Discovery Log Change Notices: Not Supported 00:07:53.722 Controller Attributes 00:07:53.722 128-bit Host Identifier: Not Supported 00:07:53.722 Non-Operational Permissive Mode: Not Supported 00:07:53.722 NVM Sets: Not Supported 00:07:53.722 Read Recovery Levels: Not Supported 00:07:53.722 Endurance Groups: Not Supported 00:07:53.722 Predictable Latency Mode: Not Supported 00:07:53.722 Traffic Based Keep Alive: Not Supported 00:07:53.722 Namespace Granularity: Not Supported 00:07:53.722 SQ Associations: Not Supported 00:07:53.722 UUID List: Not Supported 00:07:53.722 Multi-Domain Subsystem: Not Supported 00:07:53.722 Fixed Capacity Management: Not Supported 00:07:53.722 Variable Capacity Management: Not Supported 00:07:53.722 Delete Endurance Group: Not Supported 00:07:53.722 Delete NVM Set: Not Supported 00:07:53.722 Extended LBA Formats Supported: Supported 00:07:53.722 Flexible Data Placement Supported: Not Supported 00:07:53.722 00:07:53.722 Controller Memory Buffer Support 00:07:53.722 ================================ 00:07:53.722 Supported: No 00:07:53.722 00:07:53.722 Persistent Memory Region Support 00:07:53.722 ================================ 00:07:53.722 Supported: No 00:07:53.722 00:07:53.722 Admin Command Set Attributes 00:07:53.722 ============================ 00:07:53.722 Security Send/Receive: Not Supported 00:07:53.722 Format NVM: Supported 00:07:53.722 Firmware Activate/Download: Not Supported 00:07:53.722 Namespace Management: Supported 00:07:53.722 Device Self-Test: Not Supported 00:07:53.722 Directives: Supported 00:07:53.722 NVMe-MI: Not Supported 00:07:53.722 Virtualization Management: Not Supported 00:07:53.722 Doorbell Buffer Config: Supported 00:07:53.722 Get LBA Status Capability: Not Supported 00:07:53.722 Command & Feature Lockdown Capability: Not Supported 00:07:53.722 Abort Command Limit: 4 00:07:53.722 Async Event Request Limit: 4 00:07:53.723 Number of Firmware Slots: N/A 00:07:53.723 Firmware Slot 1 Read-Only: N/A 00:07:53.723 Firmware Activation Without Reset: N/A 00:07:53.723 Multiple Update Detection Support: N/A 00:07:53.723 Firmware Update Granularity: No Information
Provided 00:07:53.723 Per-Namespace SMART Log: Yes 00:07:53.723 Asymmetric Namespace Access Log Page: Not Supported 00:07:53.723 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:53.723 Command Effects Log Page: Supported 00:07:53.723 Get Log Page Extended Data: Supported 00:07:53.723 Telemetry Log Pages: Not Supported 00:07:53.723 Persistent Event Log Pages: Not Supported 00:07:53.723 Supported Log Pages Log Page: May Support 00:07:53.723 Commands Supported & Effects Log Page: Not Supported 00:07:53.723 Feature Identifiers & Effects Log Page: May Support 00:07:53.723 NVMe-MI Commands & Effects Log Page: May Support 00:07:53.723 Data Area 4 for Telemetry Log: Not Supported 00:07:53.723 Error Log Page Entries Supported: 1 00:07:53.723 Keep Alive: Not Supported 00:07:53.723 00:07:53.723 NVM Command Set Attributes 00:07:53.723 ========================== 00:07:53.723 Submission Queue Entry Size 00:07:53.723 Max: 64 00:07:53.723 Min: 64 00:07:53.723 Completion Queue Entry Size 00:07:53.723 Max: 16 00:07:53.723 Min: 16 00:07:53.723 Number of Namespaces: 256 00:07:53.723 Compare Command: Supported 00:07:53.723 Write Uncorrectable Command: Not Supported 00:07:53.723 Dataset Management Command: Supported 00:07:53.723 Write Zeroes Command: Supported 00:07:53.723 Set Features Save Field: Supported 00:07:53.723 Reservations: Not Supported 00:07:53.723 Timestamp: Supported 00:07:53.723 Copy: Supported 00:07:53.723 Volatile Write Cache: Present 00:07:53.723 Atomic Write Unit (Normal): 1 00:07:53.723 Atomic Write Unit (PFail): 1 00:07:53.723 Atomic Compare & Write Unit: 1 00:07:53.723 Fused Compare & Write: Not Supported 00:07:53.723 Scatter-Gather List 00:07:53.723 SGL Command Set: Supported 00:07:53.723 SGL Keyed: Not Supported 00:07:53.723 SGL Bit Bucket Descriptor: Not Supported 00:07:53.723 SGL Metadata Pointer: Not Supported 00:07:53.723 Oversized SGL: Not Supported 00:07:53.723 SGL Metadata Address: Not Supported 00:07:53.723 SGL Offset: Not Supported 00:07:53.723 Transport SGL Data Block: Not Supported 00:07:53.723 Replay Protected Memory Block: Not Supported 00:07:53.723 00:07:53.723 Firmware Slot Information 00:07:53.723 ========================= 00:07:53.723 Active slot: 1 00:07:53.723 Slot 1 Firmware Revision: 1.0 00:07:53.723 00:07:53.723 00:07:53.723 Commands Supported and Effects 00:07:53.723 ============================== 00:07:53.723 Admin Commands 00:07:53.723 -------------- 00:07:53.723 Delete I/O Submission Queue (00h): Supported 00:07:53.723 Create I/O Submission Queue (01h): Supported 00:07:53.723 Get Log Page (02h): Supported 00:07:53.723 Delete I/O Completion Queue (04h): Supported 00:07:53.723 Create I/O Completion Queue (05h): Supported 00:07:53.723 Identify (06h): Supported 00:07:53.723 Abort (08h): Supported 00:07:53.723 Set Features (09h): Supported 00:07:53.723 Get Features (0Ah): Supported 00:07:53.723 Asynchronous Event Request (0Ch): Supported 00:07:53.723 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:53.723 Directive Send (19h): Supported 00:07:53.723 Directive Receive (1Ah): Supported 00:07:53.723 Virtualization Management (1Ch): Supported 00:07:53.723 Doorbell Buffer Config (7Ch): Supported 00:07:53.723 Format NVM (80h): Supported LBA-Change 00:07:53.723 I/O Commands 00:07:53.723 ------------ 00:07:53.723 Flush (00h): Supported LBA-Change 00:07:53.723 Write (01h): Supported LBA-Change 00:07:53.723 Read (02h): Supported 00:07:53.723 Compare (05h): Supported 00:07:53.723 Write Zeroes (08h): Supported LBA-Change 00:07:53.723 Dataset Management (09h):
Supported LBA-Change 00:07:53.723 Unknown (0Ch): Supported 00:07:53.723 Unknown (12h): Supported 00:07:53.723 Copy (19h): Supported LBA-Change 00:07:53.723 Unknown (1Dh): Supported LBA-Change 00:07:53.723 00:07:53.723 Error Log 00:07:53.723 ========= 00:07:53.723 00:07:53.723 Arbitration 00:07:53.723 =========== 00:07:53.723 Arbitration Burst: no limit 00:07:53.723 00:07:53.723 Power Management 00:07:53.723 ================ 00:07:53.723 Number of Power States: 1 00:07:53.723 Current Power State: Power State #0 00:07:53.723 Power State #0: 00:07:53.723 Max Power: 25.00 W 00:07:53.723 Non-Operational State: Operational 00:07:53.723 Entry Latency: 16 microseconds 00:07:53.723 Exit Latency: 4 microseconds 00:07:53.723 Relative Read Throughput: 0 00:07:53.723 Relative Read Latency: 0 00:07:53.723 Relative Write Throughput: 0 00:07:53.723 Relative Write Latency: 0 00:07:53.723 Idle Power: Not Reported 00:07:53.723 Active Power: Not Reported 00:07:53.723 Non-Operational Permissive Mode: Not Supported 00:07:53.723 00:07:53.723 Health Information 00:07:53.723 ================== 00:07:53.723 Critical Warnings: 00:07:53.723 Available Spare Space: OK 00:07:53.723 Temperature: OK 00:07:53.723 Device Reliability: OK 00:07:53.723 Read Only: No 00:07:53.723 Volatile Memory Backup: OK 00:07:53.723 Current Temperature: 323 Kelvin (50 Celsius) 00:07:53.723 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:53.723 Available Spare: 0% 00:07:53.723 Available Spare Threshold: 0% 00:07:53.723 Life Percentage Used: 0% 00:07:53.723 Data Units Read: 2416 00:07:53.723 Data Units Written: 2203 00:07:53.723 Host Read Commands: 127652 00:07:53.723 Host Write Commands: 125921 00:07:53.723 Controller Busy Time: 0 minutes 00:07:53.723 Power Cycles: 0 00:07:53.723 Power On Hours: 0 hours 00:07:53.723 Unsafe Shutdowns: 0 00:07:53.723 Unrecoverable Media Errors: 0 00:07:53.723 Lifetime Error Log Entries: 0 00:07:53.723 Warning Temperature Time: 0 minutes 00:07:53.723 Critical Temperature Time: 0 minutes 00:07:53.723 00:07:53.723 Number of Queues 00:07:53.723 ================ 00:07:53.723 Number of I/O Submission Queues: 64 00:07:53.723 Number of I/O Completion Queues: 64 00:07:53.723 00:07:53.723 ZNS Specific Controller Data 00:07:53.723 ============================ 00:07:53.723 Zone Append Size Limit: 0 00:07:53.723 00:07:53.723 00:07:53.723 Active Namespaces 00:07:53.723 ================= 00:07:53.723 Namespace ID:1 00:07:53.723 Error Recovery Timeout: Unlimited 00:07:53.723 Command Set Identifier: NVM (00h) 00:07:53.723 Deallocate: Supported 00:07:53.723 Deallocated/Unwritten Error: Supported 00:07:53.723 Deallocated Read Value: All 0x00 00:07:53.723 Deallocate in Write Zeroes: Not Supported 00:07:53.723 Deallocated Guard Field: 0xFFFF 00:07:53.723 Flush: Supported 00:07:53.723 Reservation: Not Supported 00:07:53.723 Namespace Sharing Capabilities: Private 00:07:53.723 Size (in LBAs): 1048576 (4GiB) 00:07:53.723 Capacity (in LBAs): 1048576 (4GiB) 00:07:53.723 Utilization (in LBAs): 1048576 (4GiB) 00:07:53.723 Thin Provisioning: Not Supported 00:07:53.723 Per-NS Atomic Units: No 00:07:53.723 Maximum Single Source Range Length: 128 00:07:53.723 Maximum Copy Length: 128 00:07:53.723 Maximum Source Range Count: 128 00:07:53.723 NGUID/EUI64 Never Reused: No 00:07:53.723 Namespace Write Protected: No 00:07:53.723 Number of LBA Formats: 8 00:07:53.723 Current LBA Format: LBA Format #04 00:07:53.723 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:53.723 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:53.723 LBA 
Format #02: Data Size: 512 Metadata Size: 16 00:07:53.723 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:53.723 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:53.723 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:53.723 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:53.723 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:53.723 00:07:53.723 NVM Specific Namespace Data 00:07:53.723 =========================== 00:07:53.723 Logical Block Storage Tag Mask: 0 00:07:53.723 Protection Information Capabilities: 00:07:53.723 16b Guard Protection Information Storage Tag Support: No 00:07:53.723 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:53.723 Storage Tag Check Read Support: No 00:07:53.723 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.723 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.723 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.723 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.723 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.723 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.723 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.723 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.723 Namespace ID:2 00:07:53.723 Error Recovery Timeout: Unlimited 00:07:53.723 Command Set Identifier: NVM (00h) 00:07:53.723 Deallocate: Supported 00:07:53.723 Deallocated/Unwritten Error: Supported 00:07:53.723 Deallocated Read Value: All 0x00 00:07:53.724 Deallocate in Write Zeroes: Not Supported 00:07:53.724 Deallocated Guard Field: 0xFFFF 00:07:53.724 Flush: Supported 00:07:53.724 Reservation: Not Supported 00:07:53.724 Namespace Sharing Capabilities: Private 00:07:53.724 Size (in LBAs): 1048576 (4GiB) 00:07:53.724 Capacity (in LBAs): 1048576 (4GiB) 00:07:53.724 Utilization (in LBAs): 1048576 (4GiB) 00:07:53.724 Thin Provisioning: Not Supported 00:07:53.724 Per-NS Atomic Units: No 00:07:53.724 Maximum Single Source Range Length: 128 00:07:53.724 Maximum Copy Length: 128 00:07:53.724 Maximum Source Range Count: 128 00:07:53.724 NGUID/EUI64 Never Reused: No 00:07:53.724 Namespace Write Protected: No 00:07:53.724 Number of LBA Formats: 8 00:07:53.724 Current LBA Format: LBA Format #04 00:07:53.724 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:53.724 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:53.724 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:53.724 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:53.724 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:53.724 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:53.724 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:53.724 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:53.724 00:07:53.724 NVM Specific Namespace Data 00:07:53.724 =========================== 00:07:53.724 Logical Block Storage Tag Mask: 0 00:07:53.724 Protection Information Capabilities: 00:07:53.724 16b Guard Protection Information Storage Tag Support: No 00:07:53.724 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:53.724 Storage Tag Check Read Support: No 00:07:53.724 Extended LBA Format #00: Storage Tag Size: 0 
, Protection Information Format: 16b Guard PI 00:07:53.724 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.724 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.724 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.724 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.724 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.724 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.724 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.724 Namespace ID:3 00:07:53.724 Error Recovery Timeout: Unlimited 00:07:53.724 Command Set Identifier: NVM (00h) 00:07:53.724 Deallocate: Supported 00:07:53.724 Deallocated/Unwritten Error: Supported 00:07:53.724 Deallocated Read Value: All 0x00 00:07:53.724 Deallocate in Write Zeroes: Not Supported 00:07:53.724 Deallocated Guard Field: 0xFFFF 00:07:53.724 Flush: Supported 00:07:53.724 Reservation: Not Supported 00:07:53.724 Namespace Sharing Capabilities: Private 00:07:53.724 Size (in LBAs): 1048576 (4GiB) 00:07:53.724 Capacity (in LBAs): 1048576 (4GiB) 00:07:53.724 Utilization (in LBAs): 1048576 (4GiB) 00:07:53.724 Thin Provisioning: Not Supported 00:07:53.724 Per-NS Atomic Units: No 00:07:53.724 Maximum Single Source Range Length: 128 00:07:53.724 Maximum Copy Length: 128 00:07:53.724 Maximum Source Range Count: 128 00:07:53.724 NGUID/EUI64 Never Reused: No 00:07:53.724 Namespace Write Protected: No 00:07:53.724 Number of LBA Formats: 8 00:07:53.724 Current LBA Format: LBA Format #04 00:07:53.724 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:53.724 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:53.724 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:53.724 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:53.724 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:53.724 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:53.724 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:53.724 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:53.724 00:07:53.724 NVM Specific Namespace Data 00:07:53.724 =========================== 00:07:53.724 Logical Block Storage Tag Mask: 0 00:07:53.724 Protection Information Capabilities: 00:07:53.724 16b Guard Protection Information Storage Tag Support: No 00:07:53.724 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:53.724 Storage Tag Check Read Support: No 00:07:53.724 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.724 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.724 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.724 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.724 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.724 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.724 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.724 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.724 02:18:40 nvme.nvme_identify 
-- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:53.724 02:18:40 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:53.983 ===================================================== 00:07:53.983 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:53.983 ===================================================== 00:07:53.983 Controller Capabilities/Features 00:07:53.983 ================================ 00:07:53.983 Vendor ID: 1b36 00:07:53.983 Subsystem Vendor ID: 1af4 00:07:53.983 Serial Number: 12340 00:07:53.983 Model Number: QEMU NVMe Ctrl 00:07:53.983 Firmware Version: 8.0.0 00:07:53.983 Recommended Arb Burst: 6 00:07:53.983 IEEE OUI Identifier: 00 54 52 00:07:53.983 Multi-path I/O 00:07:53.983 May have multiple subsystem ports: No 00:07:53.983 May have multiple controllers: No 00:07:53.983 Associated with SR-IOV VF: No 00:07:53.983 Max Data Transfer Size: 524288 00:07:53.983 Max Number of Namespaces: 256 00:07:53.983 Max Number of I/O Queues: 64 00:07:53.983 NVMe Specification Version (VS): 1.4 00:07:53.983 NVMe Specification Version (Identify): 1.4 00:07:53.983 Maximum Queue Entries: 2048 00:07:53.983 Contiguous Queues Required: Yes 00:07:53.983 Arbitration Mechanisms Supported 00:07:53.983 Weighted Round Robin: Not Supported 00:07:53.983 Vendor Specific: Not Supported 00:07:53.983 Reset Timeout: 7500 ms 00:07:53.983 Doorbell Stride: 4 bytes 00:07:53.983 NVM Subsystem Reset: Not Supported 00:07:53.983 Command Sets Supported 00:07:53.983 NVM Command Set: Supported 00:07:53.983 Boot Partition: Not Supported 00:07:53.983 Memory Page Size Minimum: 4096 bytes 00:07:53.983 Memory Page Size Maximum: 65536 bytes 00:07:53.983 Persistent Memory Region: Not Supported 00:07:53.983 Optional Asynchronous Events Supported 00:07:53.983 Namespace Attribute Notices: Supported 00:07:53.983 Firmware Activation Notices: Not Supported 00:07:53.983 ANA Change Notices: Not Supported 00:07:53.983 PLE Aggregate Log Change Notices: Not Supported 00:07:53.983 LBA Status Info Alert Notices: Not Supported 00:07:53.983 EGE Aggregate Log Change Notices: Not Supported 00:07:53.983 Normal NVM Subsystem Shutdown event: Not Supported 00:07:53.983 Zone Descriptor Change Notices: Not Supported 00:07:53.983 Discovery Log Change Notices: Not Supported 00:07:53.983 Controller Attributes 00:07:53.983 128-bit Host Identifier: Not Supported 00:07:53.983 Non-Operational Permissive Mode: Not Supported 00:07:53.983 NVM Sets: Not Supported 00:07:53.983 Read Recovery Levels: Not Supported 00:07:53.983 Endurance Groups: Not Supported 00:07:53.983 Predictable Latency Mode: Not Supported 00:07:53.983 Traffic Based Keep Alive: Not Supported 00:07:53.983 Namespace Granularity: Not Supported 00:07:53.983 SQ Associations: Not Supported 00:07:53.983 UUID List: Not Supported 00:07:53.983 Multi-Domain Subsystem: Not Supported 00:07:53.983 Fixed Capacity Management: Not Supported 00:07:53.983 Variable Capacity Management: Not Supported 00:07:53.983 Delete Endurance Group: Not Supported 00:07:53.983 Delete NVM Set: Not Supported 00:07:53.983 Extended LBA Formats Supported: Supported 00:07:53.983 Flexible Data Placement Supported: Not Supported 00:07:53.983 00:07:53.983 Controller Memory Buffer Support 00:07:53.983 ================================ 00:07:53.983 Supported: No 00:07:53.983 00:07:53.983 Persistent Memory Region Support 00:07:53.983 ================================ 00:07:53.983 Supported: No 00:07:53.983 00:07:53.983 Admin Command Set
Attributes 00:07:53.983 ============================ 00:07:53.983 Security Send/Receive: Not Supported 00:07:53.983 Format NVM: Supported 00:07:53.983 Firmware Activate/Download: Not Supported 00:07:53.983 Namespace Management: Supported 00:07:53.983 Device Self-Test: Not Supported 00:07:53.983 Directives: Supported 00:07:53.983 NVMe-MI: Not Supported 00:07:53.984 Virtualization Management: Not Supported 00:07:53.984 Doorbell Buffer Config: Supported 00:07:53.984 Get LBA Status Capability: Not Supported 00:07:53.984 Command & Feature Lockdown Capability: Not Supported 00:07:53.984 Abort Command Limit: 4 00:07:53.984 Async Event Request Limit: 4 00:07:53.984 Number of Firmware Slots: N/A 00:07:53.984 Firmware Slot 1 Read-Only: N/A 00:07:53.984 Firmware Activation Without Reset: N/A 00:07:53.984 Multiple Update Detection Support: N/A 00:07:53.984 Firmware Update Granularity: No Information Provided 00:07:53.984 Per-Namespace SMART Log: Yes 00:07:53.984 Asymmetric Namespace Access Log Page: Not Supported 00:07:53.984 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:53.984 Command Effects Log Page: Supported 00:07:53.984 Get Log Page Extended Data: Supported 00:07:53.984 Telemetry Log Pages: Not Supported 00:07:53.984 Persistent Event Log Pages: Not Supported 00:07:53.984 Supported Log Pages Log Page: May Support 00:07:53.984 Commands Supported & Effects Log Page: Not Supported 00:07:53.984 Feature Identifiers & Effects Log Page: May Support 00:07:53.984 NVMe-MI Commands & Effects Log Page: May Support 00:07:53.984 Data Area 4 for Telemetry Log: Not Supported 00:07:53.984 Error Log Page Entries Supported: 1 00:07:53.984 Keep Alive: Not Supported 00:07:53.984 00:07:53.984 NVM Command Set Attributes 00:07:53.984 ========================== 00:07:53.984 Submission Queue Entry Size 00:07:53.984 Max: 64 00:07:53.984 Min: 64 00:07:53.984 Completion Queue Entry Size 00:07:53.984 Max: 16 00:07:53.984 Min: 16 00:07:53.984 Number of Namespaces: 256 00:07:53.984 Compare Command: Supported 00:07:53.984 Write Uncorrectable Command: Not Supported 00:07:53.984 Dataset Management Command: Supported 00:07:53.984 Write Zeroes Command: Supported 00:07:53.984 Set Features Save Field: Supported 00:07:53.984 Reservations: Not Supported 00:07:53.984 Timestamp: Supported 00:07:53.984 Copy: Supported 00:07:53.984 Volatile Write Cache: Present 00:07:53.984 Atomic Write Unit (Normal): 1 00:07:53.984 Atomic Write Unit (PFail): 1 00:07:53.984 Atomic Compare & Write Unit: 1 00:07:53.984 Fused Compare & Write: Not Supported 00:07:53.984 Scatter-Gather List 00:07:53.984 SGL Command Set: Supported 00:07:53.984 SGL Keyed: Not Supported 00:07:53.984 SGL Bit Bucket Descriptor: Not Supported 00:07:53.984 SGL Metadata Pointer: Not Supported 00:07:53.984 Oversized SGL: Not Supported 00:07:53.984 SGL Metadata Address: Not Supported 00:07:53.984 SGL Offset: Not Supported 00:07:53.984 Transport SGL Data Block: Not Supported 00:07:53.984 Replay Protected Memory Block: Not Supported 00:07:53.984 00:07:53.984 Firmware Slot Information 00:07:53.984 ========================= 00:07:53.984 Active slot: 1 00:07:53.984 Slot 1 Firmware Revision: 1.0 00:07:53.984 00:07:53.984 00:07:53.984 Commands Supported and Effects 00:07:53.984 ============================== 00:07:53.984 Admin Commands 00:07:53.984 -------------- 00:07:53.984 Delete I/O Submission Queue (00h): Supported 00:07:53.984 Create I/O Submission Queue (01h): Supported 00:07:53.984 Get Log Page (02h): Supported 00:07:53.984 Delete I/O Completion Queue (04h): Supported 00:07:53.984
Create I/O Completion Queue (05h): Supported 00:07:53.984 Identify (06h): Supported 00:07:53.984 Abort (08h): Supported 00:07:53.984 Set Features (09h): Supported 00:07:53.984 Get Features (0Ah): Supported 00:07:53.984 Asynchronous Event Request (0Ch): Supported 00:07:53.984 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:53.984 Directive Send (19h): Supported 00:07:53.984 Directive Receive (1Ah): Supported 00:07:53.984 Virtualization Management (1Ch): Supported 00:07:53.984 Doorbell Buffer Config (7Ch): Supported 00:07:53.984 Format NVM (80h): Supported LBA-Change 00:07:53.984 I/O Commands 00:07:53.984 ------------ 00:07:53.984 Flush (00h): Supported LBA-Change 00:07:53.984 Write (01h): Supported LBA-Change 00:07:53.984 Read (02h): Supported 00:07:53.984 Compare (05h): Supported 00:07:53.984 Write Zeroes (08h): Supported LBA-Change 00:07:53.984 Dataset Management (09h): Supported LBA-Change 00:07:53.984 Unknown (0Ch): Supported 00:07:53.984 Unknown (12h): Supported 00:07:53.984 Copy (19h): Supported LBA-Change 00:07:53.984 Unknown (1Dh): Supported LBA-Change 00:07:53.984 00:07:53.984 Error Log 00:07:53.984 ========= 00:07:53.984 00:07:53.984 Arbitration 00:07:53.984 =========== 00:07:53.984 Arbitration Burst: no limit 00:07:53.984 00:07:53.984 Power Management 00:07:53.984 ================ 00:07:53.984 Number of Power States: 1 00:07:53.984 Current Power State: Power State #0 00:07:53.984 Power State #0: 00:07:53.984 Max Power: 25.00 W 00:07:53.984 Non-Operational State: Operational 00:07:53.984 Entry Latency: 16 microseconds 00:07:53.984 Exit Latency: 4 microseconds 00:07:53.984 Relative Read Throughput: 0 00:07:53.984 Relative Read Latency: 0 00:07:53.984 Relative Write Throughput: 0 00:07:53.984 Relative Write Latency: 0 00:07:53.984 Idle Power: Not Reported 00:07:53.984 Active Power: Not Reported 00:07:53.984 Non-Operational Permissive Mode: Not Supported 00:07:53.984 00:07:53.984 Health Information 00:07:53.984 ================== 00:07:53.984 Critical Warnings: 00:07:53.984 Available Spare Space: OK 00:07:53.984 Temperature: OK 00:07:53.984 Device Reliability: OK 00:07:53.984 Read Only: No 00:07:53.984 Volatile Memory Backup: OK 00:07:53.984 Current Temperature: 323 Kelvin (50 Celsius) 00:07:53.984 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:53.984 Available Spare: 0% 00:07:53.984 Available Spare Threshold: 0% 00:07:53.984 Life Percentage Used: 0% 00:07:53.984 Data Units Read: 695 00:07:53.984 Data Units Written: 623 00:07:53.984 Host Read Commands: 41565 00:07:53.984 Host Write Commands: 41351 00:07:53.984 Controller Busy Time: 0 minutes 00:07:53.984 Power Cycles: 0 00:07:53.984 Power On Hours: 0 hours 00:07:53.984 Unsafe Shutdowns: 0 00:07:53.984 Unrecoverable Media Errors: 0 00:07:53.984 Lifetime Error Log Entries: 0 00:07:53.984 Warning Temperature Time: 0 minutes 00:07:53.984 Critical Temperature Time: 0 minutes 00:07:53.984 00:07:53.984 Number of Queues 00:07:53.984 ================ 00:07:53.984 Number of I/O Submission Queues: 64 00:07:53.984 Number of I/O Completion Queues: 64 00:07:53.984 00:07:53.984 ZNS Specific Controller Data 00:07:53.984 ============================ 00:07:53.984 Zone Append Size Limit: 0 00:07:53.984 00:07:53.984 00:07:53.984 Active Namespaces 00:07:53.984 ================= 00:07:53.984 Namespace ID:1 00:07:53.984 Error Recovery Timeout: Unlimited 00:07:53.984 Command Set Identifier: NVM (00h) 00:07:53.984 Deallocate: Supported 00:07:53.984 Deallocated/Unwritten Error: Supported 00:07:53.984 Deallocated Read Value: All 0x00 
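"Data Units Read" and "Data Units Written" in the health section above are counted in units of 1,000 512-byte blocks per the NVMe base specification, and temperatures are reported in Kelvin. A quick conversion of the 12340 controller's figures, as a bash sketch (illustrative only, not part of the test scripts):

    data_units_read=695                                    # "Data Units Read" above
    echo "$(( data_units_read * 1000 * 512 )) bytes read"  # 355840000 bytes, roughly 356 MB
    temp_k=323                                             # "Current Temperature" above
    echo "$(( temp_k - 273 )) C"                           # 50 C, matching the "(50 Celsius)" annotation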
00:07:53.984 Deallocate in Write Zeroes: Not Supported 00:07:53.984 Deallocated Guard Field: 0xFFFF 00:07:53.984 Flush: Supported 00:07:53.984 Reservation: Not Supported 00:07:53.984 Metadata Transferred as: Separate Metadata Buffer 00:07:53.984 Namespace Sharing Capabilities: Private 00:07:53.984 Size (in LBAs): 1548666 (5GiB) 00:07:53.984 Capacity (in LBAs): 1548666 (5GiB) 00:07:53.984 Utilization (in LBAs): 1548666 (5GiB) 00:07:53.984 Thin Provisioning: Not Supported 00:07:53.984 Per-NS Atomic Units: No 00:07:53.984 Maximum Single Source Range Length: 128 00:07:53.984 Maximum Copy Length: 128 00:07:53.984 Maximum Source Range Count: 128 00:07:53.984 NGUID/EUI64 Never Reused: No 00:07:53.984 Namespace Write Protected: No 00:07:53.984 Number of LBA Formats: 8 00:07:53.984 Current LBA Format: LBA Format #07 00:07:53.984 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:53.984 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:53.984 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:53.984 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:53.984 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:53.984 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:53.984 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:53.984 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:53.984 00:07:53.984 NVM Specific Namespace Data 00:07:53.984 =========================== 00:07:53.984 Logical Block Storage Tag Mask: 0 00:07:53.984 Protection Information Capabilities: 00:07:53.984 16b Guard Protection Information Storage Tag Support: No 00:07:53.984 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:53.984 Storage Tag Check Read Support: No 00:07:53.984 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.984 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.984 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.984 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.985 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.985 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.985 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.985 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:53.985 02:18:41 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:53.985 02:18:41 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:54.243 ===================================================== 00:07:54.243 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:54.243 ===================================================== 00:07:54.243 Controller Capabilities/Features 00:07:54.243 ================================ 00:07:54.243 Vendor ID: 1b36 00:07:54.243 Subsystem Vendor ID: 1af4 00:07:54.243 Serial Number: 12341 00:07:54.243 Model Number: QEMU NVMe Ctrl 00:07:54.243 Firmware Version: 8.0.0 00:07:54.243 Recommended Arb Burst: 6 00:07:54.243 IEEE OUI Identifier: 00 54 52 00:07:54.243 Multi-path I/O 00:07:54.243 May have multiple subsystem ports: No 00:07:54.243 May have multiple controllers: No 00:07:54.243 Associated with SR-IOV VF: No 00:07:54.243 Max Data Transfer Size: 
524288 00:07:54.243 Max Number of Namespaces: 256 00:07:54.243 Max Number of I/O Queues: 64 00:07:54.243 NVMe Specification Version (VS): 1.4 00:07:54.243 NVMe Specification Version (Identify): 1.4 00:07:54.243 Maximum Queue Entries: 2048 00:07:54.243 Contiguous Queues Required: Yes 00:07:54.243 Arbitration Mechanisms Supported 00:07:54.243 Weighted Round Robin: Not Supported 00:07:54.243 Vendor Specific: Not Supported 00:07:54.243 Reset Timeout: 7500 ms 00:07:54.243 Doorbell Stride: 4 bytes 00:07:54.243 NVM Subsystem Reset: Not Supported 00:07:54.243 Command Sets Supported 00:07:54.243 NVM Command Set: Supported 00:07:54.243 Boot Partition: Not Supported 00:07:54.243 Memory Page Size Minimum: 4096 bytes 00:07:54.243 Memory Page Size Maximum: 65536 bytes 00:07:54.243 Persistent Memory Region: Not Supported 00:07:54.243 Optional Asynchronous Events Supported 00:07:54.243 Namespace Attribute Notices: Supported 00:07:54.243 Firmware Activation Notices: Not Supported 00:07:54.243 ANA Change Notices: Not Supported 00:07:54.243 PLE Aggregate Log Change Notices: Not Supported 00:07:54.243 LBA Status Info Alert Notices: Not Supported 00:07:54.243 EGE Aggregate Log Change Notices: Not Supported 00:07:54.243 Normal NVM Subsystem Shutdown event: Not Supported 00:07:54.243 Zone Descriptor Change Notices: Not Supported 00:07:54.243 Discovery Log Change Notices: Not Supported 00:07:54.243 Controller Attributes 00:07:54.243 128-bit Host Identifier: Not Supported 00:07:54.243 Non-Operational Permissive Mode: Not Supported 00:07:54.243 NVM Sets: Not Supported 00:07:54.243 Read Recovery Levels: Not Supported 00:07:54.243 Endurance Groups: Not Supported 00:07:54.243 Predictable Latency Mode: Not Supported 00:07:54.243 Traffic Based Keep Alive: Not Supported 00:07:54.243 Namespace Granularity: Not Supported 00:07:54.243 SQ Associations: Not Supported 00:07:54.243 UUID List: Not Supported 00:07:54.243 Multi-Domain Subsystem: Not Supported 00:07:54.243 Fixed Capacity Management: Not Supported 00:07:54.243 Variable Capacity Management: Not Supported 00:07:54.243 Delete Endurance Group: Not Supported 00:07:54.243 Delete NVM Set: Not Supported 00:07:54.243 Extended LBA Formats Supported: Supported 00:07:54.243 Flexible Data Placement Supported: Not Supported 00:07:54.243 00:07:54.243 Controller Memory Buffer Support 00:07:54.243 ================================ 00:07:54.243 Supported: No 00:07:54.243 00:07:54.243 Persistent Memory Region Support 00:07:54.243 ================================ 00:07:54.243 Supported: No 00:07:54.243 00:07:54.243 Admin Command Set Attributes 00:07:54.243 ============================ 00:07:54.243 Security Send/Receive: Not Supported 00:07:54.243 Format NVM: Supported 00:07:54.243 Firmware Activate/Download: Not Supported 00:07:54.243 Namespace Management: Supported 00:07:54.243 Device Self-Test: Not Supported 00:07:54.243 Directives: Supported 00:07:54.243 NVMe-MI: Not Supported 00:07:54.243 Virtualization Management: Not Supported 00:07:54.243 Doorbell Buffer Config: Supported 00:07:54.243 Get LBA Status Capability: Not Supported 00:07:54.243 Command & Feature Lockdown Capability: Not Supported 00:07:54.243 Abort Command Limit: 4 00:07:54.243 Async Event Request Limit: 4 00:07:54.243 Number of Firmware Slots: N/A 00:07:54.243 Firmware Slot 1 Read-Only: N/A 00:07:54.243 Firmware Activation Without Reset: N/A 00:07:54.243 Multiple Update Detection Support: N/A 00:07:54.243 Firmware Update Granularity: No Information Provided 00:07:54.243 Per-Namespace SMART Log: Yes 00:07:54.243
Asymmetric Namespace Access Log Page: Not Supported 00:07:54.243 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:54.243 Command Effects Log Page: Supported 00:07:54.243 Get Log Page Extended Data: Supported 00:07:54.243 Telemetry Log Pages: Not Supported 00:07:54.243 Persistent Event Log Pages: Not Supported 00:07:54.243 Supported Log Pages Log Page: May Support 00:07:54.243 Commands Supported & Effects Log Page: Not Supported 00:07:54.243 Feature Identifiers & Effects Log Page: May Support 00:07:54.243 NVMe-MI Commands & Effects Log Page: May Support 00:07:54.243 Data Area 4 for Telemetry Log: Not Supported 00:07:54.243 Error Log Page Entries Supported: 1 00:07:54.243 Keep Alive: Not Supported 00:07:54.243 00:07:54.243 NVM Command Set Attributes 00:07:54.243 ========================== 00:07:54.243 Submission Queue Entry Size 00:07:54.243 Max: 64 00:07:54.243 Min: 64 00:07:54.243 Completion Queue Entry Size 00:07:54.243 Max: 16 00:07:54.243 Min: 16 00:07:54.243 Number of Namespaces: 256 00:07:54.243 Compare Command: Supported 00:07:54.243 Write Uncorrectable Command: Not Supported 00:07:54.243 Dataset Management Command: Supported 00:07:54.243 Write Zeroes Command: Supported 00:07:54.243 Set Features Save Field: Supported 00:07:54.243 Reservations: Not Supported 00:07:54.243 Timestamp: Supported 00:07:54.243 Copy: Supported 00:07:54.243 Volatile Write Cache: Present 00:07:54.243 Atomic Write Unit (Normal): 1 00:07:54.243 Atomic Write Unit (PFail): 1 00:07:54.243 Atomic Compare & Write Unit: 1 00:07:54.243 Fused Compare & Write: Not Supported 00:07:54.243 Scatter-Gather List 00:07:54.243 SGL Command Set: Supported 00:07:54.243 SGL Keyed: Not Supported 00:07:54.243 SGL Bit Bucket Descriptor: Not Supported 00:07:54.243 SGL Metadata Pointer: Not Supported 00:07:54.243 Oversized SGL: Not Supported 00:07:54.243 SGL Metadata Address: Not Supported 00:07:54.243 SGL Offset: Not Supported 00:07:54.243 Transport SGL Data Block: Not Supported 00:07:54.243 Replay Protected Memory Block: Not Supported 00:07:54.243 00:07:54.243 Firmware Slot Information 00:07:54.243 ========================= 00:07:54.243 Active slot: 1 00:07:54.243 Slot 1 Firmware Revision: 1.0 00:07:54.243 00:07:54.243 00:07:54.243 Commands Supported and Effects 00:07:54.243 ============================== 00:07:54.243 Admin Commands 00:07:54.243 -------------- 00:07:54.243 Delete I/O Submission Queue (00h): Supported 00:07:54.243 Create I/O Submission Queue (01h): Supported 00:07:54.243 Get Log Page (02h): Supported 00:07:54.244 Delete I/O Completion Queue (04h): Supported 00:07:54.244 Create I/O Completion Queue (05h): Supported 00:07:54.244 Identify (06h): Supported 00:07:54.244 Abort (08h): Supported 00:07:54.244 Set Features (09h): Supported 00:07:54.244 Get Features (0Ah): Supported 00:07:54.244 Asynchronous Event Request (0Ch): Supported 00:07:54.244 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:54.244 Directive Send (19h): Supported 00:07:54.244 Directive Receive (1Ah): Supported 00:07:54.244 Virtualization Management (1Ch): Supported 00:07:54.244 Doorbell Buffer Config (7Ch): Supported 00:07:54.244 Format NVM (80h): Supported LBA-Change 00:07:54.244 I/O Commands 00:07:54.244 ------------ 00:07:54.244 Flush (00h): Supported LBA-Change 00:07:54.244 Write (01h): Supported LBA-Change 00:07:54.244 Read (02h): Supported 00:07:54.244 Compare (05h): Supported 00:07:54.244 Write Zeroes (08h): Supported LBA-Change 00:07:54.244 Dataset Management (09h): Supported LBA-Change 00:07:54.244 Unknown (0Ch): Supported
00:07:54.244 Unknown (12h): Supported 00:07:54.244 Copy (19h): Supported LBA-Change 00:07:54.244 Unknown (1Dh): Supported LBA-Change 00:07:54.244 00:07:54.244 Error Log 00:07:54.244 ========= 00:07:54.244 00:07:54.244 Arbitration 00:07:54.244 =========== 00:07:54.244 Arbitration Burst: no limit 00:07:54.244 00:07:54.244 Power Management 00:07:54.244 ================ 00:07:54.244 Number of Power States: 1 00:07:54.244 Current Power State: Power State #0 00:07:54.244 Power State #0: 00:07:54.244 Max Power: 25.00 W 00:07:54.244 Non-Operational State: Operational 00:07:54.244 Entry Latency: 16 microseconds 00:07:54.244 Exit Latency: 4 microseconds 00:07:54.244 Relative Read Throughput: 0 00:07:54.244 Relative Read Latency: 0 00:07:54.244 Relative Write Throughput: 0 00:07:54.244 Relative Write Latency: 0 00:07:54.244 Idle Power: Not Reported 00:07:54.244 Active Power: Not Reported 00:07:54.244 Non-Operational Permissive Mode: Not Supported 00:07:54.244 00:07:54.244 Health Information 00:07:54.244 ================== 00:07:54.244 Critical Warnings: 00:07:54.244 Available Spare Space: OK 00:07:54.244 Temperature: OK 00:07:54.244 Device Reliability: OK 00:07:54.244 Read Only: No 00:07:54.244 Volatile Memory Backup: OK 00:07:54.244 Current Temperature: 323 Kelvin (50 Celsius) 00:07:54.244 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:54.244 Available Spare: 0% 00:07:54.244 Available Spare Threshold: 0% 00:07:54.244 Life Percentage Used: 0% 00:07:54.244 Data Units Read: 1086 00:07:54.244 Data Units Written: 947 00:07:54.244 Host Read Commands: 61292 00:07:54.244 Host Write Commands: 59969 00:07:54.244 Controller Busy Time: 0 minutes 00:07:54.244 Power Cycles: 0 00:07:54.244 Power On Hours: 0 hours 00:07:54.244 Unsafe Shutdowns: 0 00:07:54.244 Unrecoverable Media Errors: 0 00:07:54.244 Lifetime Error Log Entries: 0 00:07:54.244 Warning Temperature Time: 0 minutes 00:07:54.244 Critical Temperature Time: 0 minutes 00:07:54.244 00:07:54.244 Number of Queues 00:07:54.244 ================ 00:07:54.244 Number of I/O Submission Queues: 64 00:07:54.244 Number of I/O Completion Queues: 64 00:07:54.244 00:07:54.244 ZNS Specific Controller Data 00:07:54.244 ============================ 00:07:54.244 Zone Append Size Limit: 0 00:07:54.244 00:07:54.244 00:07:54.244 Active Namespaces 00:07:54.244 ================= 00:07:54.244 Namespace ID:1 00:07:54.244 Error Recovery Timeout: Unlimited 00:07:54.244 Command Set Identifier: NVM (00h) 00:07:54.244 Deallocate: Supported 00:07:54.244 Deallocated/Unwritten Error: Supported 00:07:54.244 Deallocated Read Value: All 0x00 00:07:54.244 Deallocate in Write Zeroes: Not Supported 00:07:54.244 Deallocated Guard Field: 0xFFFF 00:07:54.244 Flush: Supported 00:07:54.244 Reservation: Not Supported 00:07:54.244 Namespace Sharing Capabilities: Private 00:07:54.244 Size (in LBAs): 1310720 (5GiB) 00:07:54.244 Capacity (in LBAs): 1310720 (5GiB) 00:07:54.244 Utilization (in LBAs): 1310720 (5GiB) 00:07:54.244 Thin Provisioning: Not Supported 00:07:54.244 Per-NS Atomic Units: No 00:07:54.244 Maximum Single Source Range Length: 128 00:07:54.244 Maximum Copy Length: 128 00:07:54.244 Maximum Source Range Count: 128 00:07:54.244 NGUID/EUI64 Never Reused: No 00:07:54.244 Namespace Write Protected: No 00:07:54.244 Number of LBA Formats: 8 00:07:54.244 Current LBA Format: LBA Format #04 00:07:54.244 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:54.244 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:54.244 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:54.244 LBA 
Format #03: Data Size: 512 Metadata Size: 64 00:07:54.244 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:54.244 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:54.244 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:54.244 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:54.244 00:07:54.244 NVM Specific Namespace Data 00:07:54.244 =========================== 00:07:54.244 Logical Block Storage Tag Mask: 0 00:07:54.244 Protection Information Capabilities: 00:07:54.244 16b Guard Protection Information Storage Tag Support: No 00:07:54.244 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:54.244 Storage Tag Check Read Support: No 00:07:54.244 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.244 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.244 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.244 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.244 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.244 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.244 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.244 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.244 02:18:41 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:54.244 02:18:41 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:54.502 ===================================================== 00:07:54.502 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:54.502 ===================================================== 00:07:54.502 Controller Capabilities/Features 00:07:54.502 ================================ 00:07:54.502 Vendor ID: 1b36 00:07:54.502 Subsystem Vendor ID: 1af4 00:07:54.502 Serial Number: 12342 00:07:54.502 Model Number: QEMU NVMe Ctrl 00:07:54.502 Firmware Version: 8.0.0 00:07:54.502 Recommended Arb Burst: 6 00:07:54.502 IEEE OUI Identifier: 00 54 52 00:07:54.502 Multi-path I/O 00:07:54.502 May have multiple subsystem ports: No 00:07:54.502 May have multiple controllers: No 00:07:54.502 Associated with SR-IOV VF: No 00:07:54.503 Max Data Transfer Size: 524288 00:07:54.503 Max Number of Namespaces: 256 00:07:54.503 Max Number of I/O Queues: 64 00:07:54.503 NVMe Specification Version (VS): 1.4 00:07:54.503 NVMe Specification Version (Identify): 1.4 00:07:54.503 Maximum Queue Entries: 2048 00:07:54.503 Contiguous Queues Required: Yes 00:07:54.503 Arbitration Mechanisms Supported 00:07:54.503 Weighted Round Robin: Not Supported 00:07:54.503 Vendor Specific: Not Supported 00:07:54.503 Reset Timeout: 7500 ms 00:07:54.503 Doorbell Stride: 4 bytes 00:07:54.503 NVM Subsystem Reset: Not Supported 00:07:54.503 Command Sets Supported 00:07:54.503 NVM Command Set: Supported 00:07:54.503 Boot Partition: Not Supported 00:07:54.503 Memory Page Size Minimum: 4096 bytes 00:07:54.503 Memory Page Size Maximum: 65536 bytes 00:07:54.503 Persistent Memory Region: Not Supported 00:07:54.503 Optional Asynchronous Events Supported 00:07:54.503 Namespace Attribute Notices: Supported 00:07:54.503 Firmware Activation Notices: Not Supported 00:07:54.503 ANA Change Notices: Not 
Supported 00:07:54.503 PLE Aggregate Log Change Notices: Not Supported 00:07:54.503 LBA Status Info Alert Notices: Not Supported 00:07:54.503 EGE Aggregate Log Change Notices: Not Supported 00:07:54.503 Normal NVM Subsystem Shutdown event: Not Supported 00:07:54.503 Zone Descriptor Change Notices: Not Supported 00:07:54.503 Discovery Log Change Notices: Not Supported 00:07:54.503 Controller Attributes 00:07:54.503 128-bit Host Identifier: Not Supported 00:07:54.503 Non-Operational Permissive Mode: Not Supported 00:07:54.503 NVM Sets: Not Supported 00:07:54.503 Read Recovery Levels: Not Supported 00:07:54.503 Endurance Groups: Not Supported 00:07:54.503 Predictable Latency Mode: Not Supported 00:07:54.503 Traffic Based Keep Alive: Not Supported 00:07:54.503 Namespace Granularity: Not Supported 00:07:54.503 SQ Associations: Not Supported 00:07:54.503 UUID List: Not Supported 00:07:54.503 Multi-Domain Subsystem: Not Supported 00:07:54.503 Fixed Capacity Management: Not Supported 00:07:54.503 Variable Capacity Management: Not Supported 00:07:54.503 Delete Endurance Group: Not Supported 00:07:54.503 Delete NVM Set: Not Supported 00:07:54.503 Extended LBA Formats Supported: Supported 00:07:54.503 Flexible Data Placement Supported: Not Supported 00:07:54.503 00:07:54.503 Controller Memory Buffer Support 00:07:54.503 ================================ 00:07:54.503 Supported: No 00:07:54.503 00:07:54.503 Persistent Memory Region Support 00:07:54.503 ================================ 00:07:54.503 Supported: No 00:07:54.503 00:07:54.503 Admin Command Set Attributes 00:07:54.503 ============================ 00:07:54.503 Security Send/Receive: Not Supported 00:07:54.503 Format NVM: Supported 00:07:54.503 Firmware Activate/Download: Not Supported 00:07:54.503 Namespace Management: Supported 00:07:54.503 Device Self-Test: Not Supported 00:07:54.503 Directives: Supported 00:07:54.503 NVMe-MI: Not Supported 00:07:54.503 Virtualization Management: Not Supported 00:07:54.503 Doorbell Buffer Config: Supported 00:07:54.503 Get LBA Status Capability: Not Supported 00:07:54.503 Command & Feature Lockdown Capability: Not Supported 00:07:54.503 Abort Command Limit: 4 00:07:54.503 Async Event Request Limit: 4 00:07:54.503 Number of Firmware Slots: N/A 00:07:54.503 Firmware Slot 1 Read-Only: N/A 00:07:54.503 Firmware Activation Without Reset: N/A 00:07:54.503 Multiple Update Detection Support: N/A 00:07:54.503 Firmware Update Granularity: No Information Provided 00:07:54.503 Per-Namespace SMART Log: Yes 00:07:54.503 Asymmetric Namespace Access Log Page: Not Supported 00:07:54.503 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:54.503 Command Effects Log Page: Supported 00:07:54.503 Get Log Page Extended Data: Supported 00:07:54.503 Telemetry Log Pages: Not Supported 00:07:54.503 Persistent Event Log Pages: Not Supported 00:07:54.503 Supported Log Pages Log Page: May Support 00:07:54.503 Commands Supported & Effects Log Page: Not Supported 00:07:54.503 Feature Identifiers & Effects Log Page: May Support 00:07:54.503 NVMe-MI Commands & Effects Log Page: May Support 00:07:54.503 Data Area 4 for Telemetry Log: Not Supported 00:07:54.503 Error Log Page Entries Supported: 1 00:07:54.503 Keep Alive: Not Supported 00:07:54.503 00:07:54.503 NVM Command Set Attributes 00:07:54.503 ========================== 00:07:54.503 Submission Queue Entry Size 00:07:54.503 Max: 64 00:07:54.503 Min: 64 00:07:54.503 Completion Queue Entry Size 00:07:54.503 Max: 16 00:07:54.503 Min: 16 00:07:54.503 Number of Namespaces: 256 00:07:54.503 Compare
Command: Supported 00:07:54.503 Write Uncorrectable Command: Not Supported 00:07:54.503 Dataset Management Command: Supported 00:07:54.503 Write Zeroes Command: Supported 00:07:54.503 Set Features Save Field: Supported 00:07:54.503 Reservations: Not Supported 00:07:54.503 Timestamp: Supported 00:07:54.503 Copy: Supported 00:07:54.503 Volatile Write Cache: Present 00:07:54.503 Atomic Write Unit (Normal): 1 00:07:54.503 Atomic Write Unit (PFail): 1 00:07:54.503 Atomic Compare & Write Unit: 1 00:07:54.503 Fused Compare & Write: Not Supported 00:07:54.503 Scatter-Gather List 00:07:54.503 SGL Command Set: Supported 00:07:54.503 SGL Keyed: Not Supported 00:07:54.503 SGL Bit Bucket Descriptor: Not Supported 00:07:54.503 SGL Metadata Pointer: Not Supported 00:07:54.503 Oversized SGL: Not Supported 00:07:54.503 SGL Metadata Address: Not Supported 00:07:54.503 SGL Offset: Not Supported 00:07:54.503 Transport SGL Data Block: Not Supported 00:07:54.503 Replay Protected Memory Block: Not Supported 00:07:54.503 00:07:54.503 Firmware Slot Information 00:07:54.503 ========================= 00:07:54.503 Active slot: 1 00:07:54.503 Slot 1 Firmware Revision: 1.0 00:07:54.503 00:07:54.503 00:07:54.503 Commands Supported and Effects 00:07:54.503 ============================== 00:07:54.503 Admin Commands 00:07:54.503 -------------- 00:07:54.503 Delete I/O Submission Queue (00h): Supported 00:07:54.503 Create I/O Submission Queue (01h): Supported 00:07:54.503 Get Log Page (02h): Supported 00:07:54.503 Delete I/O Completion Queue (04h): Supported 00:07:54.503 Create I/O Completion Queue (05h): Supported 00:07:54.503 Identify (06h): Supported 00:07:54.503 Abort (08h): Supported 00:07:54.503 Set Features (09h): Supported 00:07:54.503 Get Features (0Ah): Supported 00:07:54.503 Asynchronous Event Request (0Ch): Supported 00:07:54.503 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:54.503 Directive Send (19h): Supported 00:07:54.503 Directive Receive (1Ah): Supported 00:07:54.503 Virtualization Management (1Ch): Supported 00:07:54.503 Doorbell Buffer Config (7Ch): Supported 00:07:54.503 Format NVM (80h): Supported LBA-Change 00:07:54.503 I/O Commands 00:07:54.503 ------------ 00:07:54.503 Flush (00h): Supported LBA-Change 00:07:54.503 Write (01h): Supported LBA-Change 00:07:54.503 Read (02h): Supported 00:07:54.503 Compare (05h): Supported 00:07:54.503 Write Zeroes (08h): Supported LBA-Change 00:07:54.503 Dataset Management (09h): Supported LBA-Change 00:07:54.503 Unknown (0Ch): Supported 00:07:54.503 Unknown (12h): Supported 00:07:54.503 Copy (19h): Supported LBA-Change 00:07:54.503 Unknown (1Dh): Supported LBA-Change 00:07:54.503 00:07:54.503 Error Log 00:07:54.503 ========= 00:07:54.503 00:07:54.503 Arbitration 00:07:54.503 =========== 00:07:54.503 Arbitration Burst: no limit 00:07:54.503 00:07:54.503 Power Management 00:07:54.503 ================ 00:07:54.503 Number of Power States: 1 00:07:54.503 Current Power State: Power State #0 00:07:54.503 Power State #0: 00:07:54.503 Max Power: 25.00 W 00:07:54.503 Non-Operational State: Operational 00:07:54.503 Entry Latency: 16 microseconds 00:07:54.503 Exit Latency: 4 microseconds 00:07:54.503 Relative Read Throughput: 0 00:07:54.503 Relative Read Latency: 0 00:07:54.503 Relative Write Throughput: 0 00:07:54.503 Relative Write Latency: 0 00:07:54.503 Idle Power: Not Reported 00:07:54.503 Active Power: Not Reported 00:07:54.503 Non-Operational Permissive Mode: Not Supported 00:07:54.503 00:07:54.503 Health Information 00:07:54.503 ================== 
00:07:54.503 Critical Warnings: 00:07:54.503 Available Spare Space: OK 00:07:54.503 Temperature: OK 00:07:54.503 Device Reliability: OK 00:07:54.503 Read Only: No 00:07:54.503 Volatile Memory Backup: OK 00:07:54.503 Current Temperature: 323 Kelvin (50 Celsius) 00:07:54.503 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:54.503 Available Spare: 0% 00:07:54.503 Available Spare Threshold: 0% 00:07:54.503 Life Percentage Used: 0% 00:07:54.503 Data Units Read: 2416 00:07:54.503 Data Units Written: 2203 00:07:54.504 Host Read Commands: 127652 00:07:54.504 Host Write Commands: 125921 00:07:54.504 Controller Busy Time: 0 minutes 00:07:54.504 Power Cycles: 0 00:07:54.504 Power On Hours: 0 hours 00:07:54.504 Unsafe Shutdowns: 0 00:07:54.504 Unrecoverable Media Errors: 0 00:07:54.504 Lifetime Error Log Entries: 0 00:07:54.504 Warning Temperature Time: 0 minutes 00:07:54.504 Critical Temperature Time: 0 minutes 00:07:54.504 00:07:54.504 Number of Queues 00:07:54.504 ================ 00:07:54.504 Number of I/O Submission Queues: 64 00:07:54.504 Number of I/O Completion Queues: 64 00:07:54.504 00:07:54.504 ZNS Specific Controller Data 00:07:54.504 ============================ 00:07:54.504 Zone Append Size Limit: 0 00:07:54.504 00:07:54.504 00:07:54.504 Active Namespaces 00:07:54.504 ================= 00:07:54.504 Namespace ID:1 00:07:54.504 Error Recovery Timeout: Unlimited 00:07:54.504 Command Set Identifier: NVM (00h) 00:07:54.504 Deallocate: Supported 00:07:54.504 Deallocated/Unwritten Error: Supported 00:07:54.504 Deallocated Read Value: All 0x00 00:07:54.504 Deallocate in Write Zeroes: Not Supported 00:07:54.504 Deallocated Guard Field: 0xFFFF 00:07:54.504 Flush: Supported 00:07:54.504 Reservation: Not Supported 00:07:54.504 Namespace Sharing Capabilities: Private 00:07:54.504 Size (in LBAs): 1048576 (4GiB) 00:07:54.504 Capacity (in LBAs): 1048576 (4GiB) 00:07:54.504 Utilization (in LBAs): 1048576 (4GiB) 00:07:54.504 Thin Provisioning: Not Supported 00:07:54.504 Per-NS Atomic Units: No 00:07:54.504 Maximum Single Source Range Length: 128 00:07:54.504 Maximum Copy Length: 128 00:07:54.504 Maximum Source Range Count: 128 00:07:54.504 NGUID/EUI64 Never Reused: No 00:07:54.504 Namespace Write Protected: No 00:07:54.504 Number of LBA Formats: 8 00:07:54.504 Current LBA Format: LBA Format #04 00:07:54.504 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:54.504 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:54.504 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:54.504 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:54.504 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:54.504 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:54.504 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:54.504 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:54.504 00:07:54.504 NVM Specific Namespace Data 00:07:54.504 =========================== 00:07:54.504 Logical Block Storage Tag Mask: 0 00:07:54.504 Protection Information Capabilities: 00:07:54.504 16b Guard Protection Information Storage Tag Support: No 00:07:54.504 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:54.504 Storage Tag Check Read Support: No 00:07:54.504 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.504 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.504 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.504 Extended 
LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.504 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.504 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.504 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.504 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.504 Namespace ID:2 00:07:54.504 Error Recovery Timeout: Unlimited 00:07:54.504 Command Set Identifier: NVM (00h) 00:07:54.504 Deallocate: Supported 00:07:54.504 Deallocated/Unwritten Error: Supported 00:07:54.504 Deallocated Read Value: All 0x00 00:07:54.504 Deallocate in Write Zeroes: Not Supported 00:07:54.504 Deallocated Guard Field: 0xFFFF 00:07:54.504 Flush: Supported 00:07:54.504 Reservation: Not Supported 00:07:54.504 Namespace Sharing Capabilities: Private 00:07:54.504 Size (in LBAs): 1048576 (4GiB) 00:07:54.504 Capacity (in LBAs): 1048576 (4GiB) 00:07:54.504 Utilization (in LBAs): 1048576 (4GiB) 00:07:54.504 Thin Provisioning: Not Supported 00:07:54.504 Per-NS Atomic Units: No 00:07:54.504 Maximum Single Source Range Length: 128 00:07:54.504 Maximum Copy Length: 128 00:07:54.504 Maximum Source Range Count: 128 00:07:54.504 NGUID/EUI64 Never Reused: No 00:07:54.504 Namespace Write Protected: No 00:07:54.504 Number of LBA Formats: 8 00:07:54.504 Current LBA Format: LBA Format #04 00:07:54.504 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:54.504 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:54.504 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:54.504 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:54.504 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:54.504 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:54.504 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:54.504 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:54.504 00:07:54.504 NVM Specific Namespace Data 00:07:54.504 =========================== 00:07:54.504 Logical Block Storage Tag Mask: 0 00:07:54.504 Protection Information Capabilities: 00:07:54.504 16b Guard Protection Information Storage Tag Support: No 00:07:54.504 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:54.504 Storage Tag Check Read Support: No 00:07:54.504 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.504 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.504 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.504 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.504 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.504 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.504 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.504 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.504 Namespace ID:3 00:07:54.504 Error Recovery Timeout: Unlimited 00:07:54.504 Command Set Identifier: NVM (00h) 00:07:54.504 Deallocate: Supported 00:07:54.504 Deallocated/Unwritten Error: Supported 00:07:54.504 Deallocated Read Value: All 0x00 00:07:54.504 Deallocate in Write Zeroes: Not Supported 
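The "Size (in LBAs)" figures above combine with the data size of the current LBA format to give the raw namespace capacity. For these namespaces (1048576 LBAs, current format #04 with a 4096-byte data size), the arithmetic as a bash sketch (illustrative only):

    size_lbas=1048576     # "Size (in LBAs)" above
    lba_data_size=4096    # "LBA Format #04: Data Size: 4096" above
    echo "$(( size_lbas * lba_data_size / 1024**3 )) GiB"   # prints 4, matching "(4GiB)"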
00:07:54.504 Deallocated Guard Field: 0xFFFF 00:07:54.504 Flush: Supported 00:07:54.504 Reservation: Not Supported 00:07:54.504 Namespace Sharing Capabilities: Private 00:07:54.504 Size (in LBAs): 1048576 (4GiB) 00:07:54.504 Capacity (in LBAs): 1048576 (4GiB) 00:07:54.504 Utilization (in LBAs): 1048576 (4GiB) 00:07:54.504 Thin Provisioning: Not Supported 00:07:54.504 Per-NS Atomic Units: No 00:07:54.504 Maximum Single Source Range Length: 128 00:07:54.504 Maximum Copy Length: 128 00:07:54.504 Maximum Source Range Count: 128 00:07:54.504 NGUID/EUI64 Never Reused: No 00:07:54.504 Namespace Write Protected: No 00:07:54.504 Number of LBA Formats: 8 00:07:54.504 Current LBA Format: LBA Format #04 00:07:54.504 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:54.504 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:54.504 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:54.504 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:54.504 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:54.504 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:54.504 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:54.504 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:54.504 00:07:54.504 NVM Specific Namespace Data 00:07:54.504 =========================== 00:07:54.504 Logical Block Storage Tag Mask: 0 00:07:54.504 Protection Information Capabilities: 00:07:54.504 16b Guard Protection Information Storage Tag Support: No 00:07:54.504 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:54.504 Storage Tag Check Read Support: No 00:07:54.504 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.504 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.504 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.504 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.504 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.504 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.504 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.504 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.504 02:18:41 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:54.504 02:18:41 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:54.763 ===================================================== 00:07:54.763 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:54.763 ===================================================== 00:07:54.763 Controller Capabilities/Features 00:07:54.763 ================================ 00:07:54.763 Vendor ID: 1b36 00:07:54.763 Subsystem Vendor ID: 1af4 00:07:54.763 Serial Number: 12343 00:07:54.763 Model Number: QEMU NVMe Ctrl 00:07:54.763 Firmware Version: 8.0.0 00:07:54.763 Recommended Arb Burst: 6 00:07:54.763 IEEE OUI Identifier: 00 54 52 00:07:54.763 Multi-path I/O 00:07:54.763 May have multiple subsystem ports: No 00:07:54.763 May have multiple controllers: Yes 00:07:54.763 Associated with SR-IOV VF: No 00:07:54.763 Max Data Transfer Size: 524288 00:07:54.763 Max Number of Namespaces: 256 00:07:54.763 Max Number of I/O Queues: 64 00:07:54.763 NVMe 
Specification Version (VS): 1.4 00:07:54.763 NVMe Specification Version (Identify): 1.4 00:07:54.763 Maximum Queue Entries: 2048 00:07:54.763 Contiguous Queues Required: Yes 00:07:54.763 Arbitration Mechanisms Supported 00:07:54.763 Weighted Round Robin: Not Supported 00:07:54.763 Vendor Specific: Not Supported 00:07:54.763 Reset Timeout: 7500 ms 00:07:54.763 Doorbell Stride: 4 bytes 00:07:54.763 NVM Subsystem Reset: Not Supported 00:07:54.763 Command Sets Supported 00:07:54.763 NVM Command Set: Supported 00:07:54.763 Boot Partition: Not Supported 00:07:54.763 Memory Page Size Minimum: 4096 bytes 00:07:54.763 Memory Page Size Maximum: 65536 bytes 00:07:54.763 Persistent Memory Region: Not Supported 00:07:54.763 Optional Asynchronous Events Supported 00:07:54.763 Namespace Attribute Notices: Supported 00:07:54.763 Firmware Activation Notices: Not Supported 00:07:54.763 ANA Change Notices: Not Supported 00:07:54.763 PLE Aggregate Log Change Notices: Not Supported 00:07:54.763 LBA Status Info Alert Notices: Not Supported 00:07:54.763 EGE Aggregate Log Change Notices: Not Supported 00:07:54.763 Normal NVM Subsystem Shutdown event: Not Supported 00:07:54.763 Zone Descriptor Change Notices: Not Supported 00:07:54.763 Discovery Log Change Notices: Not Supported 00:07:54.763 Controller Attributes 00:07:54.763 128-bit Host Identifier: Not Supported 00:07:54.763 Non-Operational Permissive Mode: Not Supported 00:07:54.763 NVM Sets: Not Supported 00:07:54.763 Read Recovery Levels: Not Supported 00:07:54.763 Endurance Groups: Supported 00:07:54.763 Predictable Latency Mode: Not Supported 00:07:54.763 Traffic Based Keep Alive: Not Supported 00:07:54.763 Namespace Granularity: Not Supported 00:07:54.763 SQ Associations: Not Supported 00:07:54.763 UUID List: Not Supported 00:07:54.763 Multi-Domain Subsystem: Not Supported 00:07:54.763 Fixed Capacity Management: Not Supported 00:07:54.763 Variable Capacity Management: Not Supported 00:07:54.763 Delete Endurance Group: Not Supported 00:07:54.763 Delete NVM Set: Not Supported 00:07:54.763 Extended LBA Formats Supported: Supported 00:07:54.763 Flexible Data Placement Supported: Supported 00:07:54.763 00:07:54.763 Controller Memory Buffer Support 00:07:54.763 ================================ 00:07:54.764 Supported: No 00:07:54.764 00:07:54.764 Persistent Memory Region Support 00:07:54.764 ================================ 00:07:54.764 Supported: No 00:07:54.764 00:07:54.764 Admin Command Set Attributes 00:07:54.764 ============================ 00:07:54.764 Security Send/Receive: Not Supported 00:07:54.764 Format NVM: Supported 00:07:54.764 Firmware Activate/Download: Not Supported 00:07:54.764 Namespace Management: Supported 00:07:54.764 Device Self-Test: Not Supported 00:07:54.764 Directives: Supported 00:07:54.764 NVMe-MI: Not Supported 00:07:54.764 Virtualization Management: Not Supported 00:07:54.764 Doorbell Buffer Config: Supported 00:07:54.764 Get LBA Status Capability: Not Supported 00:07:54.764 Command & Feature Lockdown Capability: Not Supported 00:07:54.764 Abort Command Limit: 4 00:07:54.764 Async Event Request Limit: 4 00:07:54.764 Number of Firmware Slots: N/A 00:07:54.764 Firmware Slot 1 Read-Only: N/A 00:07:54.764 Firmware Activation Without Reset: N/A 00:07:54.764 Multiple Update Detection Support: N/A 00:07:54.764 Firmware Update Granularity: No Information Provided 00:07:54.764 Per-Namespace SMART Log: Yes 00:07:54.764 Asymmetric Namespace Access Log Page: Not Supported 00:07:54.764 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:54.764
Command Effects Log Page: Supported 00:07:54.764 Get Log Page Extended Data: Supported 00:07:54.764 Telemetry Log Pages: Not Supported 00:07:54.764 Persistent Event Log Pages: Not Supported 00:07:54.764 Supported Log Pages Log Page: May Support 00:07:54.764 Commands Supported & Effects Log Page: Not Supported 00:07:54.764 Feature Identifiers & Effects Log Page: May Support 00:07:54.764 NVMe-MI Commands & Effects Log Page: May Support 00:07:54.764 Data Area 4 for Telemetry Log: Not Supported 00:07:54.764 Error Log Page Entries Supported: 1 00:07:54.764 Keep Alive: Not Supported 00:07:54.764 00:07:54.764 NVM Command Set Attributes 00:07:54.764 ========================== 00:07:54.764 Submission Queue Entry Size 00:07:54.764 Max: 64 00:07:54.764 Min: 64 00:07:54.764 Completion Queue Entry Size 00:07:54.764 Max: 16 00:07:54.764 Min: 16 00:07:54.764 Number of Namespaces: 256 00:07:54.764 Compare Command: Supported 00:07:54.764 Write Uncorrectable Command: Not Supported 00:07:54.764 Dataset Management Command: Supported 00:07:54.764 Write Zeroes Command: Supported 00:07:54.764 Set Features Save Field: Supported 00:07:54.764 Reservations: Not Supported 00:07:54.764 Timestamp: Supported 00:07:54.764 Copy: Supported 00:07:54.764 Volatile Write Cache: Present 00:07:54.764 Atomic Write Unit (Normal): 1 00:07:54.764 Atomic Write Unit (PFail): 1 00:07:54.764 Atomic Compare & Write Unit: 1 00:07:54.764 Fused Compare & Write: Not Supported 00:07:54.764 Scatter-Gather List 00:07:54.764 SGL Command Set: Supported 00:07:54.764 SGL Keyed: Not Supported 00:07:54.764 SGL Bit Bucket Descriptor: Not Supported 00:07:54.764 SGL Metadata Pointer: Not Supported 00:07:54.764 Oversized SGL: Not Supported 00:07:54.764 SGL Metadata Address: Not Supported 00:07:54.764 SGL Offset: Not Supported 00:07:54.764 Transport SGL Data Block: Not Supported 00:07:54.764 Replay Protected Memory Block: Not Supported 00:07:54.764 00:07:54.764 Firmware Slot Information 00:07:54.764 ========================= 00:07:54.764 Active slot: 1 00:07:54.764 Slot 1 Firmware Revision: 1.0 00:07:54.764 00:07:54.764 00:07:54.764 Commands Supported and Effects 00:07:54.764 ============================== 00:07:54.764 Admin Commands 00:07:54.764 -------------- 00:07:54.764 Delete I/O Submission Queue (00h): Supported 00:07:54.764 Create I/O Submission Queue (01h): Supported 00:07:54.764 Get Log Page (02h): Supported 00:07:54.764 Delete I/O Completion Queue (04h): Supported 00:07:54.764 Create I/O Completion Queue (05h): Supported 00:07:54.764 Identify (06h): Supported 00:07:54.764 Abort (08h): Supported 00:07:54.764 Set Features (09h): Supported 00:07:54.764 Get Features (0Ah): Supported 00:07:54.764 Asynchronous Event Request (0Ch): Supported 00:07:54.764 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:54.764 Directive Send (19h): Supported 00:07:54.764 Directive Receive (1Ah): Supported 00:07:54.764 Virtualization Management (1Ch): Supported 00:07:54.764 Doorbell Buffer Config (7Ch): Supported 00:07:54.764 Format NVM (80h): Supported LBA-Change 00:07:54.764 I/O Commands 00:07:54.764 ------------ 00:07:54.764 Flush (00h): Supported LBA-Change 00:07:54.764 Write (01h): Supported LBA-Change 00:07:54.764 Read (02h): Supported 00:07:54.764 Compare (05h): Supported 00:07:54.764 Write Zeroes (08h): Supported LBA-Change 00:07:54.764 Dataset Management (09h): Supported LBA-Change 00:07:54.764 Unknown (0Ch): Supported 00:07:54.764 Unknown (12h): Supported 00:07:54.764 Copy (19h): Supported LBA-Change 00:07:54.764 Unknown (1Dh): Supported
LBA-Change 00:07:54.764 00:07:54.764 Error Log 00:07:54.764 ========= 00:07:54.764 00:07:54.764 Arbitration 00:07:54.764 =========== 00:07:54.764 Arbitration Burst: no limit 00:07:54.764 00:07:54.764 Power Management 00:07:54.764 ================ 00:07:54.764 Number of Power States: 1 00:07:54.764 Current Power State: Power State #0 00:07:54.764 Power State #0: 00:07:54.764 Max Power: 25.00 W 00:07:54.764 Non-Operational State: Operational 00:07:54.764 Entry Latency: 16 microseconds 00:07:54.764 Exit Latency: 4 microseconds 00:07:54.764 Relative Read Throughput: 0 00:07:54.764 Relative Read Latency: 0 00:07:54.764 Relative Write Throughput: 0 00:07:54.764 Relative Write Latency: 0 00:07:54.764 Idle Power: Not Reported 00:07:54.764 Active Power: Not Reported 00:07:54.764 Non-Operational Permissive Mode: Not Supported 00:07:54.764 00:07:54.764 Health Information 00:07:54.764 ================== 00:07:54.764 Critical Warnings: 00:07:54.764 Available Spare Space: OK 00:07:54.764 Temperature: OK 00:07:54.764 Device Reliability: OK 00:07:54.764 Read Only: No 00:07:54.764 Volatile Memory Backup: OK 00:07:54.764 Current Temperature: 323 Kelvin (50 Celsius) 00:07:54.764 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:54.764 Available Spare: 0% 00:07:54.764 Available Spare Threshold: 0% 00:07:54.764 Life Percentage Used: 0% 00:07:54.764 Data Units Read: 1051 00:07:54.764 Data Units Written: 980 00:07:54.764 Host Read Commands: 44471 00:07:54.764 Host Write Commands: 43894 00:07:54.764 Controller Busy Time: 0 minutes 00:07:54.764 Power Cycles: 0 00:07:54.764 Power On Hours: 0 hours 00:07:54.764 Unsafe Shutdowns: 0 00:07:54.764 Unrecoverable Media Errors: 0 00:07:54.764 Lifetime Error Log Entries: 0 00:07:54.764 Warning Temperature Time: 0 minutes 00:07:54.764 Critical Temperature Time: 0 minutes 00:07:54.764 00:07:54.764 Number of Queues 00:07:54.764 ================ 00:07:54.764 Number of I/O Submission Queues: 64 00:07:54.764 Number of I/O Completion Queues: 64 00:07:54.764 00:07:54.764 ZNS Specific Controller Data 00:07:54.764 ============================ 00:07:54.764 Zone Append Size Limit: 0 00:07:54.764 00:07:54.764 00:07:54.764 Active Namespaces 00:07:54.764 ================= 00:07:54.764 Namespace ID:1 00:07:54.764 Error Recovery Timeout: Unlimited 00:07:54.764 Command Set Identifier: NVM (00h) 00:07:54.764 Deallocate: Supported 00:07:54.764 Deallocated/Unwritten Error: Supported 00:07:54.764 Deallocated Read Value: All 0x00 00:07:54.764 Deallocate in Write Zeroes: Not Supported 00:07:54.764 Deallocated Guard Field: 0xFFFF 00:07:54.764 Flush: Supported 00:07:54.764 Reservation: Not Supported 00:07:54.764 Namespace Sharing Capabilities: Multiple Controllers 00:07:54.764 Size (in LBAs): 262144 (1GiB) 00:07:54.764 Capacity (in LBAs): 262144 (1GiB) 00:07:54.764 Utilization (in LBAs): 262144 (1GiB) 00:07:54.764 Thin Provisioning: Not Supported 00:07:54.764 Per-NS Atomic Units: No 00:07:54.764 Maximum Single Source Range Length: 128 00:07:54.764 Maximum Copy Length: 128 00:07:54.764 Maximum Source Range Count: 128 00:07:54.764 NGUID/EUI64 Never Reused: No 00:07:54.764 Namespace Write Protected: No 00:07:54.764 Endurance group ID: 1 00:07:54.764 Number of LBA Formats: 8 00:07:54.764 Current LBA Format: LBA Format #04 00:07:54.764 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:54.764 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:54.764 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:54.764 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:54.764 LBA Format #04: Data 
Size: 4096 Metadata Size: 0 00:07:54.764 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:54.764 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:54.764 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:54.764 00:07:54.764 Get Feature FDP: 00:07:54.765 ================ 00:07:54.765 Enabled: Yes 00:07:54.765 FDP configuration index: 0 00:07:54.765 00:07:54.765 FDP configurations log page 00:07:54.765 =========================== 00:07:54.765 Number of FDP configurations: 1 00:07:54.765 Version: 0 00:07:54.765 Size: 112 00:07:54.765 FDP Configuration Descriptor: 0 00:07:54.765 Descriptor Size: 96 00:07:54.765 Reclaim Group Identifier format: 2 00:07:54.765 FDP Volatile Write Cache: Not Present 00:07:54.765 FDP Configuration: Valid 00:07:54.765 Vendor Specific Size: 0 00:07:54.765 Number of Reclaim Groups: 2 00:07:54.765 Number of Reclaim Unit Handles: 8 00:07:54.765 Max Placement Identifiers: 128 00:07:54.765 Number of Namespaces Supported: 256 00:07:54.765 Reclaim Unit Nominal Size: 6000000 bytes 00:07:54.765 Estimated Reclaim Unit Time Limit: Not Reported 00:07:54.765 RUH Desc #000: RUH Type: Initially Isolated 00:07:54.765 RUH Desc #001: RUH Type: Initially Isolated 00:07:54.765 RUH Desc #002: RUH Type: Initially Isolated 00:07:54.765 RUH Desc #003: RUH Type: Initially Isolated 00:07:54.765 RUH Desc #004: RUH Type: Initially Isolated 00:07:54.765 RUH Desc #005: RUH Type: Initially Isolated 00:07:54.765 RUH Desc #006: RUH Type: Initially Isolated 00:07:54.765 RUH Desc #007: RUH Type: Initially Isolated 00:07:54.765 00:07:54.765 FDP reclaim unit handle usage log page 00:07:54.765 ====================================== 00:07:54.765 Number of Reclaim Unit Handles: 8 00:07:54.765 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:54.765 RUH Usage Desc #001: RUH Attributes: Unused 00:07:54.765 RUH Usage Desc #002: RUH Attributes: Unused 00:07:54.765 RUH Usage Desc #003: RUH Attributes: Unused 00:07:54.765 RUH Usage Desc #004: RUH Attributes: Unused 00:07:54.765 RUH Usage Desc #005: RUH Attributes: Unused 00:07:54.765 RUH Usage Desc #006: RUH Attributes: Unused 00:07:54.765 RUH Usage Desc #007: RUH Attributes: Unused 00:07:54.765 00:07:54.765 FDP statistics log page 00:07:54.765 ======================= 00:07:54.765 Host bytes with metadata written: 611033088 00:07:54.765 Media bytes with metadata written: 614391808 00:07:54.765 Media bytes erased: 0 00:07:54.765 00:07:54.765 FDP events log page 00:07:54.765 =================== 00:07:54.765 Number of FDP events: 0 00:07:54.765 00:07:54.765 NVM Specific Namespace Data 00:07:54.765 =========================== 00:07:54.765 Logical Block Storage Tag Mask: 0 00:07:54.765 Protection Information Capabilities: 00:07:54.765 16b Guard Protection Information Storage Tag Support: No 00:07:54.765 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:54.765 Storage Tag Check Read Support: No 00:07:54.765 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.765 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.765 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.765 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.765 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.765 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard
PI 00:07:54.765 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.765 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.765 00:07:54.765 real 0m1.200s 00:07:54.765 user 0m0.434s 00:07:54.765 sys 0m0.546s 00:07:54.765 02:18:41 nvme.nvme_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:54.765 02:18:41 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:54.765 ************************************ 00:07:54.765 END TEST nvme_identify 00:07:54.765 ************************************ 00:07:54.765 02:18:41 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:54.765 02:18:41 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:54.765 02:18:41 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:54.765 02:18:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:54.765 ************************************ 00:07:54.765 START TEST nvme_perf 00:07:54.765 ************************************ 00:07:54.765 02:18:41 nvme.nvme_perf -- common/autotest_common.sh@1127 -- # nvme_perf 00:07:54.765 02:18:41 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:56.139 Initializing NVMe Controllers 00:07:56.139 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:56.139 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:56.139 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:56.139 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:56.139 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:56.139 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:56.139 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:56.139 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:56.139 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:56.139 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:56.139 Initialization complete. Launching workers. 
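A couple of figures in the identify dumps above can be cross-checked by hand: the controller reports 323 Kelvin as 50 Celsius, i.e. an offset of 273 is applied, and each namespace's reported size is just its LBA count multiplied by the data size of the current LBA format (#04, 4096 bytes). A minimal sketch of those checks, assuming nothing beyond a POSIX shell with awk:

  # Temperature: the identify output converts Kelvin to Celsius by subtracting 273.
  awk 'BEGIN { printf "323 K = %d C\n", 323 - 273 }'          # -> 50, as reported
  # Capacity: Size (in LBAs) times the 4096-byte data size of LBA Format #04.
  awk 'BEGIN { printf "%d GiB\n", 1048576 * 4096 / 2^30 }'    # NSID 2 and 3 -> 4 GiB
  awk 'BEGIN { printf "%d GiB\n", 262144 * 4096 / 2^30 }'     # FDP controller NSID 1 -> 1 GiB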
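In the results that follow, each histogram row gives a latency bucket in microseconds with the cumulative percentage of I/Os and the per-bucket I/O count, and the MiB/s column follows from the IOPS column and the 12288-byte I/O size passed to spdk_nvme_perf via -o. A quick sanity check of the first device row and the Total row, again assuming only awk:

  # Throughput: IOPS times 12288-byte I/Os, expressed in MiB/s.
  awk 'BEGIN { printf "%.2f MiB/s\n", 16840.86 * 12288 / 1048576 }'  # -> 197.35, matching the table
  # Total row: six namespaces driven at the same per-namespace rate.
  awk 'BEGIN { printf "%.2f IOPS\n", 6 * 16840.86 }'                 # -> 101045.16 (table: 101045.18, per-row rounding)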
00:07:56.139 ======================================================== 00:07:56.139 Latency(us) 00:07:56.139 Device Information : IOPS MiB/s Average min max 00:07:56.139 PCIE (0000:00:10.0) NSID 1 from core 0: 16840.86 197.35 7620.73 5411.77 29338.02 00:07:56.139 PCIE (0000:00:11.0) NSID 1 from core 0: 16840.86 197.35 7612.90 5450.54 27752.04 00:07:56.139 PCIE (0000:00:13.0) NSID 1 from core 0: 16840.86 197.35 7603.42 5463.62 26609.63 00:07:56.139 PCIE (0000:00:12.0) NSID 1 from core 0: 16840.86 197.35 7593.94 5493.58 24952.80 00:07:56.139 PCIE (0000:00:12.0) NSID 2 from core 0: 16840.86 197.35 7583.44 5467.42 23334.45 00:07:56.139 PCIE (0000:00:12.0) NSID 3 from core 0: 16840.86 197.35 7571.71 5492.43 21707.26 00:07:56.139 ======================================================== 00:07:56.139 Total : 101045.18 1184.12 7597.69 5411.77 29338.02 00:07:56.139 00:07:56.139 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:56.139 ================================================================================= 00:07:56.139 1.00000% : 5721.797us 00:07:56.139 10.00000% : 6377.157us 00:07:56.139 25.00000% : 6906.486us 00:07:56.139 50.00000% : 7360.197us 00:07:56.140 75.00000% : 7864.320us 00:07:56.140 90.00000% : 8620.505us 00:07:56.140 95.00000% : 9981.637us 00:07:56.140 98.00000% : 11796.480us 00:07:56.140 99.00000% : 12552.665us 00:07:56.140 99.50000% : 22786.363us 00:07:56.140 99.90000% : 29037.489us 00:07:56.140 99.99000% : 29440.788us 00:07:56.140 99.99900% : 29440.788us 00:07:56.140 99.99990% : 29440.788us 00:07:56.140 99.99999% : 29440.788us 00:07:56.140 00:07:56.140 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:56.140 ================================================================================= 00:07:56.140 1.00000% : 5772.209us 00:07:56.140 10.00000% : 6377.157us 00:07:56.140 25.00000% : 6906.486us 00:07:56.140 50.00000% : 7360.197us 00:07:56.140 75.00000% : 7864.320us 00:07:56.140 90.00000% : 8570.092us 00:07:56.140 95.00000% : 10082.462us 00:07:56.140 98.00000% : 11947.717us 00:07:56.140 99.00000% : 12552.665us 00:07:56.140 99.50000% : 21677.292us 00:07:56.140 99.90000% : 27424.295us 00:07:56.140 99.99000% : 27827.594us 00:07:56.140 99.99900% : 27827.594us 00:07:56.140 99.99990% : 27827.594us 00:07:56.140 99.99999% : 27827.594us 00:07:56.140 00:07:56.140 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:56.140 ================================================================================= 00:07:56.140 1.00000% : 5772.209us 00:07:56.140 10.00000% : 6377.157us 00:07:56.140 25.00000% : 6906.486us 00:07:56.140 50.00000% : 7360.197us 00:07:56.140 75.00000% : 7864.320us 00:07:56.140 90.00000% : 8570.092us 00:07:56.140 95.00000% : 10082.462us 00:07:56.140 98.00000% : 11947.717us 00:07:56.140 99.00000% : 12754.314us 00:07:56.140 99.50000% : 20568.222us 00:07:56.140 99.90000% : 26214.400us 00:07:56.140 99.99000% : 26617.698us 00:07:56.140 99.99900% : 26617.698us 00:07:56.140 99.99990% : 26617.698us 00:07:56.140 99.99999% : 26617.698us 00:07:56.140 00:07:56.140 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:56.140 ================================================================================= 00:07:56.140 1.00000% : 5772.209us 00:07:56.140 10.00000% : 6377.157us 00:07:56.140 25.00000% : 6906.486us 00:07:56.140 50.00000% : 7360.197us 00:07:56.140 75.00000% : 7864.320us 00:07:56.140 90.00000% : 8570.092us 00:07:56.140 95.00000% : 10032.049us 00:07:56.140 98.00000% : 11998.129us 00:07:56.140 99.00000% : 
12703.902us 00:07:56.140 99.50000% : 18955.028us 00:07:56.140 99.90000% : 24601.206us 00:07:56.140 99.99000% : 25004.505us 00:07:56.140 99.99900% : 25004.505us 00:07:56.140 99.99990% : 25004.505us 00:07:56.140 99.99999% : 25004.505us 00:07:56.140 00:07:56.140 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:56.140 ================================================================================= 00:07:56.140 1.00000% : 5772.209us 00:07:56.140 10.00000% : 6377.157us 00:07:56.140 25.00000% : 6906.486us 00:07:56.140 50.00000% : 7360.197us 00:07:56.140 75.00000% : 7864.320us 00:07:56.140 90.00000% : 8620.505us 00:07:56.140 95.00000% : 10082.462us 00:07:56.140 98.00000% : 11947.717us 00:07:56.140 99.00000% : 12603.077us 00:07:56.140 99.50000% : 17341.834us 00:07:56.140 99.90000% : 22988.012us 00:07:56.140 99.99000% : 23391.311us 00:07:56.140 99.99900% : 23391.311us 00:07:56.140 99.99990% : 23391.311us 00:07:56.140 99.99999% : 23391.311us 00:07:56.140 00:07:56.140 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:56.140 ================================================================================= 00:07:56.140 1.00000% : 5772.209us 00:07:56.140 10.00000% : 6402.363us 00:07:56.140 25.00000% : 6906.486us 00:07:56.140 50.00000% : 7360.197us 00:07:56.140 75.00000% : 7864.320us 00:07:56.140 90.00000% : 8620.505us 00:07:56.140 95.00000% : 10132.874us 00:07:56.140 98.00000% : 11796.480us 00:07:56.140 99.00000% : 12451.840us 00:07:56.140 99.50000% : 15728.640us 00:07:56.140 99.90000% : 21374.818us 00:07:56.140 99.99000% : 21778.117us 00:07:56.140 99.99900% : 21778.117us 00:07:56.140 99.99990% : 21778.117us 00:07:56.140 99.99999% : 21778.117us 00:07:56.140 00:07:56.140 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:56.140 ============================================================================== 00:07:56.140 Range in us Cumulative IO count 00:07:56.140 5394.117 - 5419.323: 0.0059% ( 1) 00:07:56.140 5419.323 - 5444.529: 0.0473% ( 7) 00:07:56.140 5444.529 - 5469.735: 0.1302% ( 14) 00:07:56.140 5469.735 - 5494.942: 0.1657% ( 6) 00:07:56.140 5494.942 - 5520.148: 0.2131% ( 8) 00:07:56.140 5520.148 - 5545.354: 0.2841% ( 12) 00:07:56.140 5545.354 - 5570.560: 0.3255% ( 7) 00:07:56.140 5570.560 - 5595.766: 0.3788% ( 9) 00:07:56.140 5595.766 - 5620.972: 0.4557% ( 13) 00:07:56.140 5620.972 - 5646.178: 0.6629% ( 35) 00:07:56.140 5646.178 - 5671.385: 0.7398% ( 13) 00:07:56.140 5671.385 - 5696.591: 0.8819% ( 24) 00:07:56.140 5696.591 - 5721.797: 1.0476% ( 28) 00:07:56.140 5721.797 - 5747.003: 1.2725% ( 38) 00:07:56.140 5747.003 - 5772.209: 1.4500% ( 30) 00:07:56.140 5772.209 - 5797.415: 1.6394% ( 32) 00:07:56.140 5797.415 - 5822.622: 1.8466% ( 35) 00:07:56.140 5822.622 - 5847.828: 2.0833% ( 40) 00:07:56.140 5847.828 - 5873.034: 2.3733% ( 49) 00:07:56.140 5873.034 - 5898.240: 2.5864% ( 36) 00:07:56.140 5898.240 - 5923.446: 2.9060% ( 54) 00:07:56.140 5923.446 - 5948.652: 3.1664% ( 44) 00:07:56.140 5948.652 - 5973.858: 3.4624% ( 50) 00:07:56.140 5973.858 - 5999.065: 3.7701% ( 52) 00:07:56.140 5999.065 - 6024.271: 4.0483% ( 47) 00:07:56.140 6024.271 - 6049.477: 4.3975% ( 59) 00:07:56.140 6049.477 - 6074.683: 4.6934% ( 50) 00:07:56.140 6074.683 - 6099.889: 5.0545% ( 61) 00:07:56.140 6099.889 - 6125.095: 5.3800% ( 55) 00:07:56.140 6125.095 - 6150.302: 5.6996% ( 54) 00:07:56.140 6150.302 - 6175.508: 6.1790% ( 81) 00:07:56.140 6175.508 - 6200.714: 6.7235% ( 92) 00:07:56.140 6200.714 - 6225.920: 7.1378% ( 70) 00:07:56.140 6225.920 - 6251.126: 7.6409% ( 85) 
00:07:56.140 6251.126 - 6276.332: 8.1203% ( 81) 00:07:56.140 6276.332 - 6301.538: 8.6766% ( 94) 00:07:56.140 6301.538 - 6326.745: 9.2507% ( 97) 00:07:56.140 6326.745 - 6351.951: 9.8130% ( 95) 00:07:56.140 6351.951 - 6377.157: 10.4226% ( 103) 00:07:56.140 6377.157 - 6402.363: 11.1032% ( 115) 00:07:56.140 6402.363 - 6427.569: 11.6892% ( 99) 00:07:56.140 6427.569 - 6452.775: 12.3165% ( 106) 00:07:56.140 6452.775 - 6503.188: 13.6541% ( 226) 00:07:56.140 6503.188 - 6553.600: 15.0450% ( 235) 00:07:56.140 6553.600 - 6604.012: 16.5187% ( 249) 00:07:56.140 6604.012 - 6654.425: 17.9628% ( 244) 00:07:56.140 6654.425 - 6704.837: 19.6674% ( 288) 00:07:56.140 6704.837 - 6755.249: 21.2417% ( 266) 00:07:56.140 6755.249 - 6805.662: 22.9463% ( 288) 00:07:56.140 6805.662 - 6856.074: 24.8580% ( 323) 00:07:56.140 6856.074 - 6906.486: 27.0064% ( 363) 00:07:56.140 6906.486 - 6956.898: 29.2495% ( 379) 00:07:56.140 6956.898 - 7007.311: 31.7116% ( 416) 00:07:56.140 7007.311 - 7057.723: 34.2921% ( 436) 00:07:56.140 7057.723 - 7108.135: 37.1330% ( 480) 00:07:56.140 7108.135 - 7158.548: 39.8733% ( 463) 00:07:56.140 7158.548 - 7208.960: 42.7616% ( 488) 00:07:56.140 7208.960 - 7259.372: 45.6380% ( 486) 00:07:56.140 7259.372 - 7309.785: 48.4848% ( 481) 00:07:56.140 7309.785 - 7360.197: 51.3494% ( 484) 00:07:56.140 7360.197 - 7410.609: 54.1016% ( 465) 00:07:56.140 7410.609 - 7461.022: 56.8063% ( 457) 00:07:56.140 7461.022 - 7511.434: 59.4519% ( 447) 00:07:56.140 7511.434 - 7561.846: 61.9555% ( 423) 00:07:56.140 7561.846 - 7612.258: 64.4413% ( 420) 00:07:56.140 7612.258 - 7662.671: 66.8146% ( 401) 00:07:56.140 7662.671 - 7713.083: 69.1051% ( 387) 00:07:56.140 7713.083 - 7763.495: 71.3187% ( 374) 00:07:56.140 7763.495 - 7813.908: 73.4671% ( 363) 00:07:56.140 7813.908 - 7864.320: 75.5090% ( 345) 00:07:56.140 7864.320 - 7914.732: 77.2550% ( 295) 00:07:56.140 7914.732 - 7965.145: 78.8471% ( 269) 00:07:56.140 7965.145 - 8015.557: 80.4036% ( 263) 00:07:56.140 8015.557 - 8065.969: 81.6643% ( 213) 00:07:56.140 8065.969 - 8116.382: 82.7711% ( 187) 00:07:56.140 8116.382 - 8166.794: 83.8542% ( 183) 00:07:56.140 8166.794 - 8217.206: 84.7775% ( 156) 00:07:56.140 8217.206 - 8267.618: 85.7363% ( 162) 00:07:56.140 8267.618 - 8318.031: 86.5175% ( 132) 00:07:56.140 8318.031 - 8368.443: 87.2692% ( 127) 00:07:56.140 8368.443 - 8418.855: 88.0800% ( 137) 00:07:56.140 8418.855 - 8469.268: 88.7074% ( 106) 00:07:56.140 8469.268 - 8519.680: 89.2933% ( 99) 00:07:56.140 8519.680 - 8570.092: 89.7550% ( 78) 00:07:56.140 8570.092 - 8620.505: 90.1811% ( 72) 00:07:56.140 8620.505 - 8670.917: 90.5777% ( 67) 00:07:56.140 8670.917 - 8721.329: 90.9268% ( 59) 00:07:56.140 8721.329 - 8771.742: 91.2583% ( 56) 00:07:56.140 8771.742 - 8822.154: 91.5305% ( 46) 00:07:56.140 8822.154 - 8872.566: 91.7850% ( 43) 00:07:56.140 8872.566 - 8922.978: 92.0159% ( 39) 00:07:56.140 8922.978 - 8973.391: 92.2467% ( 39) 00:07:56.140 8973.391 - 9023.803: 92.4065% ( 27) 00:07:56.140 9023.803 - 9074.215: 92.5959% ( 32) 00:07:56.140 9074.215 - 9124.628: 92.7853% ( 32) 00:07:56.140 9124.628 - 9175.040: 92.9806% ( 33) 00:07:56.140 9175.040 - 9225.452: 93.1641% ( 31) 00:07:56.140 9225.452 - 9275.865: 93.3239% ( 27) 00:07:56.140 9275.865 - 9326.277: 93.4659% ( 24) 00:07:56.140 9326.277 - 9376.689: 93.5961% ( 22) 00:07:56.140 9376.689 - 9427.102: 93.7382% ( 24) 00:07:56.140 9427.102 - 9477.514: 93.8743% ( 23) 00:07:56.141 9477.514 - 9527.926: 93.9808% ( 18) 00:07:56.141 9527.926 - 9578.338: 94.0933% ( 19) 00:07:56.141 9578.338 - 9628.751: 94.2176% ( 21) 00:07:56.141 9628.751 - 9679.163: 
94.3419% ( 21) 00:07:56.141 9679.163 - 9729.575: 94.4661% ( 21) 00:07:56.141 9729.575 - 9779.988: 94.6023% ( 23) 00:07:56.141 9779.988 - 9830.400: 94.7088% ( 18) 00:07:56.141 9830.400 - 9880.812: 94.8390% ( 22) 00:07:56.141 9880.812 - 9931.225: 94.9455% ( 18) 00:07:56.141 9931.225 - 9981.637: 95.0462% ( 17) 00:07:56.141 9981.637 - 10032.049: 95.1290% ( 14) 00:07:56.141 10032.049 - 10082.462: 95.2060% ( 13) 00:07:56.141 10082.462 - 10132.874: 95.3184% ( 19) 00:07:56.141 10132.874 - 10183.286: 95.4013% ( 14) 00:07:56.141 10183.286 - 10233.698: 95.4664% ( 11) 00:07:56.141 10233.698 - 10284.111: 95.5315% ( 11) 00:07:56.141 10284.111 - 10334.523: 95.5966% ( 11) 00:07:56.141 10334.523 - 10384.935: 95.6735% ( 13) 00:07:56.141 10384.935 - 10435.348: 95.7327% ( 10) 00:07:56.141 10435.348 - 10485.760: 95.8097% ( 13) 00:07:56.141 10485.760 - 10536.172: 95.8688% ( 10) 00:07:56.141 10536.172 - 10586.585: 95.9517% ( 14) 00:07:56.141 10586.585 - 10636.997: 96.0050% ( 9) 00:07:56.141 10636.997 - 10687.409: 96.0819% ( 13) 00:07:56.141 10687.409 - 10737.822: 96.1470% ( 11) 00:07:56.141 10737.822 - 10788.234: 96.2417% ( 16) 00:07:56.141 10788.234 - 10838.646: 96.3127% ( 12) 00:07:56.141 10838.646 - 10889.058: 96.4193% ( 18) 00:07:56.141 10889.058 - 10939.471: 96.4844% ( 11) 00:07:56.141 10939.471 - 10989.883: 96.5909% ( 18) 00:07:56.141 10989.883 - 11040.295: 96.6679% ( 13) 00:07:56.141 11040.295 - 11090.708: 96.7330% ( 11) 00:07:56.141 11090.708 - 11141.120: 96.8040% ( 12) 00:07:56.141 11141.120 - 11191.532: 96.9046% ( 17) 00:07:56.141 11191.532 - 11241.945: 96.9756% ( 12) 00:07:56.141 11241.945 - 11292.357: 97.0762% ( 17) 00:07:56.141 11292.357 - 11342.769: 97.1887% ( 19) 00:07:56.141 11342.769 - 11393.182: 97.2715% ( 14) 00:07:56.141 11393.182 - 11443.594: 97.3544% ( 14) 00:07:56.141 11443.594 - 11494.006: 97.4728% ( 20) 00:07:56.141 11494.006 - 11544.418: 97.5556% ( 14) 00:07:56.141 11544.418 - 11594.831: 97.6799% ( 21) 00:07:56.141 11594.831 - 11645.243: 97.7924% ( 19) 00:07:56.141 11645.243 - 11695.655: 97.8871% ( 16) 00:07:56.141 11695.655 - 11746.068: 97.9640% ( 13) 00:07:56.141 11746.068 - 11796.480: 98.0291% ( 11) 00:07:56.141 11796.480 - 11846.892: 98.1179% ( 15) 00:07:56.141 11846.892 - 11897.305: 98.1889% ( 12) 00:07:56.141 11897.305 - 11947.717: 98.2718% ( 14) 00:07:56.141 11947.717 - 11998.129: 98.3487% ( 13) 00:07:56.141 11998.129 - 12048.542: 98.4079% ( 10) 00:07:56.141 12048.542 - 12098.954: 98.5144% ( 18) 00:07:56.141 12098.954 - 12149.366: 98.5736% ( 10) 00:07:56.141 12149.366 - 12199.778: 98.6506% ( 13) 00:07:56.141 12199.778 - 12250.191: 98.7216% ( 12) 00:07:56.141 12250.191 - 12300.603: 98.7808% ( 10) 00:07:56.141 12300.603 - 12351.015: 98.8340% ( 9) 00:07:56.141 12351.015 - 12401.428: 98.8991% ( 11) 00:07:56.141 12401.428 - 12451.840: 98.9524% ( 9) 00:07:56.141 12451.840 - 12502.252: 98.9998% ( 8) 00:07:56.141 12502.252 - 12552.665: 99.0471% ( 8) 00:07:56.141 12552.665 - 12603.077: 99.0945% ( 8) 00:07:56.141 12603.077 - 12653.489: 99.1181% ( 4) 00:07:56.141 12653.489 - 12703.902: 99.1359% ( 3) 00:07:56.141 12703.902 - 12754.314: 99.1655% ( 5) 00:07:56.141 12754.314 - 12804.726: 99.1892% ( 4) 00:07:56.141 12804.726 - 12855.138: 99.2010% ( 2) 00:07:56.141 12855.138 - 12905.551: 99.2128% ( 2) 00:07:56.141 12905.551 - 13006.375: 99.2306% ( 3) 00:07:56.141 13006.375 - 13107.200: 99.2424% ( 2) 00:07:56.141 21374.818 - 21475.643: 99.2483% ( 1) 00:07:56.141 21475.643 - 21576.468: 99.2602% ( 2) 00:07:56.141 21576.468 - 21677.292: 99.2839% ( 4) 00:07:56.141 21677.292 - 21778.117: 99.3016% ( 3) 
00:07:56.141 21778.117 - 21878.942: 99.3253% ( 4) 00:07:56.141 21878.942 - 21979.766: 99.3430% ( 3) 00:07:56.141 21979.766 - 22080.591: 99.3667% ( 4) 00:07:56.141 22080.591 - 22181.415: 99.3904% ( 4) 00:07:56.141 22181.415 - 22282.240: 99.4081% ( 3) 00:07:56.141 22282.240 - 22383.065: 99.4259% ( 3) 00:07:56.141 22383.065 - 22483.889: 99.4437% ( 3) 00:07:56.141 22483.889 - 22584.714: 99.4673% ( 4) 00:07:56.141 22584.714 - 22685.538: 99.4851% ( 3) 00:07:56.141 22685.538 - 22786.363: 99.5088% ( 4) 00:07:56.141 22786.363 - 22887.188: 99.5324% ( 4) 00:07:56.141 22887.188 - 22988.012: 99.5443% ( 2) 00:07:56.141 22988.012 - 23088.837: 99.5739% ( 5) 00:07:56.141 23088.837 - 23189.662: 99.5975% ( 4) 00:07:56.141 23189.662 - 23290.486: 99.6153% ( 3) 00:07:56.141 23290.486 - 23391.311: 99.6212% ( 1) 00:07:56.141 27424.295 - 27625.945: 99.6449% ( 4) 00:07:56.141 27625.945 - 27827.594: 99.6922% ( 8) 00:07:56.141 27827.594 - 28029.243: 99.7277% ( 6) 00:07:56.141 28029.243 - 28230.892: 99.7692% ( 7) 00:07:56.141 28230.892 - 28432.542: 99.8106% ( 7) 00:07:56.141 28432.542 - 28634.191: 99.8520% ( 7) 00:07:56.141 28634.191 - 28835.840: 99.8935% ( 7) 00:07:56.141 28835.840 - 29037.489: 99.9408% ( 8) 00:07:56.141 29037.489 - 29239.138: 99.9822% ( 7) 00:07:56.141 29239.138 - 29440.788: 100.0000% ( 3) 00:07:56.141 00:07:56.141 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:56.141 ============================================================================== 00:07:56.141 Range in us Cumulative IO count 00:07:56.141 5444.529 - 5469.735: 0.0118% ( 2) 00:07:56.141 5469.735 - 5494.942: 0.0178% ( 1) 00:07:56.141 5494.942 - 5520.148: 0.0296% ( 2) 00:07:56.141 5520.148 - 5545.354: 0.0769% ( 8) 00:07:56.141 5545.354 - 5570.560: 0.1776% ( 17) 00:07:56.141 5570.560 - 5595.766: 0.2308% ( 9) 00:07:56.141 5595.766 - 5620.972: 0.2723% ( 7) 00:07:56.141 5620.972 - 5646.178: 0.3492% ( 13) 00:07:56.141 5646.178 - 5671.385: 0.4261% ( 13) 00:07:56.141 5671.385 - 5696.591: 0.5149% ( 15) 00:07:56.141 5696.591 - 5721.797: 0.6688% ( 26) 00:07:56.141 5721.797 - 5747.003: 0.8049% ( 23) 00:07:56.141 5747.003 - 5772.209: 1.0062% ( 34) 00:07:56.141 5772.209 - 5797.415: 1.1837% ( 30) 00:07:56.141 5797.415 - 5822.622: 1.3790% ( 33) 00:07:56.141 5822.622 - 5847.828: 1.6217% ( 41) 00:07:56.141 5847.828 - 5873.034: 1.8407% ( 37) 00:07:56.141 5873.034 - 5898.240: 2.1603% ( 54) 00:07:56.141 5898.240 - 5923.446: 2.4503% ( 49) 00:07:56.141 5923.446 - 5948.652: 2.7462% ( 50) 00:07:56.141 5948.652 - 5973.858: 3.0303% ( 48) 00:07:56.141 5973.858 - 5999.065: 3.2907% ( 44) 00:07:56.141 5999.065 - 6024.271: 3.6162% ( 55) 00:07:56.141 6024.271 - 6049.477: 3.9299% ( 53) 00:07:56.141 6049.477 - 6074.683: 4.3561% ( 72) 00:07:56.141 6074.683 - 6099.889: 4.7704% ( 70) 00:07:56.141 6099.889 - 6125.095: 5.1432% ( 63) 00:07:56.141 6125.095 - 6150.302: 5.4806% ( 57) 00:07:56.141 6150.302 - 6175.508: 5.8357% ( 60) 00:07:56.141 6175.508 - 6200.714: 6.2737% ( 74) 00:07:56.141 6200.714 - 6225.920: 6.7649% ( 83) 00:07:56.141 6225.920 - 6251.126: 7.2384% ( 80) 00:07:56.141 6251.126 - 6276.332: 7.7652% ( 89) 00:07:56.141 6276.332 - 6301.538: 8.2978% ( 90) 00:07:56.141 6301.538 - 6326.745: 8.9134% ( 104) 00:07:56.141 6326.745 - 6351.951: 9.4401% ( 89) 00:07:56.141 6351.951 - 6377.157: 10.0556% ( 104) 00:07:56.141 6377.157 - 6402.363: 10.7126% ( 111) 00:07:56.141 6402.363 - 6427.569: 11.3518% ( 108) 00:07:56.141 6427.569 - 6452.775: 11.9792% ( 106) 00:07:56.141 6452.775 - 6503.188: 13.3168% ( 226) 00:07:56.141 6503.188 - 6553.600: 14.7313% ( 239) 
00:07:56.141 6553.600 - 6604.012: 16.2464% ( 256) 00:07:56.141 6604.012 - 6654.425: 17.7498% ( 254) 00:07:56.141 6654.425 - 6704.837: 19.3004% ( 262) 00:07:56.141 6704.837 - 6755.249: 20.9103% ( 272) 00:07:56.141 6755.249 - 6805.662: 22.6326% ( 291) 00:07:56.141 6805.662 - 6856.074: 24.4437% ( 306) 00:07:56.141 6856.074 - 6906.486: 26.3731% ( 326) 00:07:56.141 6906.486 - 6956.898: 28.5748% ( 372) 00:07:56.141 6956.898 - 7007.311: 30.8535% ( 385) 00:07:56.141 7007.311 - 7057.723: 33.3925% ( 429) 00:07:56.141 7057.723 - 7108.135: 36.0085% ( 442) 00:07:56.141 7108.135 - 7158.548: 38.7902% ( 470) 00:07:56.141 7158.548 - 7208.960: 41.7318% ( 497) 00:07:56.141 7208.960 - 7259.372: 44.7917% ( 517) 00:07:56.141 7259.372 - 7309.785: 47.9048% ( 526) 00:07:56.141 7309.785 - 7360.197: 51.0298% ( 528) 00:07:56.141 7360.197 - 7410.609: 54.0956% ( 518) 00:07:56.141 7410.609 - 7461.022: 57.0431% ( 498) 00:07:56.141 7461.022 - 7511.434: 59.9491% ( 491) 00:07:56.141 7511.434 - 7561.846: 62.7604% ( 475) 00:07:56.141 7561.846 - 7612.258: 65.4297% ( 451) 00:07:56.141 7612.258 - 7662.671: 67.9273% ( 422) 00:07:56.141 7662.671 - 7713.083: 70.2652% ( 395) 00:07:56.141 7713.083 - 7763.495: 72.4254% ( 365) 00:07:56.141 7763.495 - 7813.908: 74.4732% ( 346) 00:07:56.141 7813.908 - 7864.320: 76.3554% ( 318) 00:07:56.141 7864.320 - 7914.732: 78.0599% ( 288) 00:07:56.141 7914.732 - 7965.145: 79.6520% ( 269) 00:07:56.141 7965.145 - 8015.557: 81.0784% ( 241) 00:07:56.141 8015.557 - 8065.969: 82.3449% ( 214) 00:07:56.141 8065.969 - 8116.382: 83.5523% ( 204) 00:07:56.141 8116.382 - 8166.794: 84.6295% ( 182) 00:07:56.141 8166.794 - 8217.206: 85.6416% ( 171) 00:07:56.141 8217.206 - 8267.618: 86.5826% ( 159) 00:07:56.141 8267.618 - 8318.031: 87.4053% ( 139) 00:07:56.141 8318.031 - 8368.443: 88.1688% ( 129) 00:07:56.141 8368.443 - 8418.855: 88.9027% ( 124) 00:07:56.141 8418.855 - 8469.268: 89.4413% ( 91) 00:07:56.141 8469.268 - 8519.680: 89.9503% ( 86) 00:07:56.141 8519.680 - 8570.092: 90.4415% ( 83) 00:07:56.141 8570.092 - 8620.505: 90.9091% ( 79) 00:07:56.141 8620.505 - 8670.917: 91.2760% ( 62) 00:07:56.141 8670.917 - 8721.329: 91.5661% ( 49) 00:07:56.141 8721.329 - 8771.742: 91.8265% ( 44) 00:07:56.142 8771.742 - 8822.154: 92.0218% ( 33) 00:07:56.142 8822.154 - 8872.566: 92.1638% ( 24) 00:07:56.142 8872.566 - 8922.978: 92.3355% ( 29) 00:07:56.142 8922.978 - 8973.391: 92.5130% ( 30) 00:07:56.142 8973.391 - 9023.803: 92.6787% ( 28) 00:07:56.142 9023.803 - 9074.215: 92.8445% ( 28) 00:07:56.142 9074.215 - 9124.628: 92.9392% ( 16) 00:07:56.142 9124.628 - 9175.040: 93.0339% ( 16) 00:07:56.142 9175.040 - 9225.452: 93.1345% ( 17) 00:07:56.142 9225.452 - 9275.865: 93.2292% ( 16) 00:07:56.142 9275.865 - 9326.277: 93.3357% ( 18) 00:07:56.142 9326.277 - 9376.689: 93.4363% ( 17) 00:07:56.142 9376.689 - 9427.102: 93.5547% ( 20) 00:07:56.142 9427.102 - 9477.514: 93.6731% ( 20) 00:07:56.142 9477.514 - 9527.926: 93.7914% ( 20) 00:07:56.142 9527.926 - 9578.338: 93.9216% ( 22) 00:07:56.142 9578.338 - 9628.751: 94.0459% ( 21) 00:07:56.142 9628.751 - 9679.163: 94.1761% ( 22) 00:07:56.142 9679.163 - 9729.575: 94.3241% ( 25) 00:07:56.142 9729.575 - 9779.988: 94.4306% ( 18) 00:07:56.142 9779.988 - 9830.400: 94.5549% ( 21) 00:07:56.142 9830.400 - 9880.812: 94.6792% ( 21) 00:07:56.142 9880.812 - 9931.225: 94.7857% ( 18) 00:07:56.142 9931.225 - 9981.637: 94.8864% ( 17) 00:07:56.142 9981.637 - 10032.049: 94.9870% ( 17) 00:07:56.142 10032.049 - 10082.462: 95.0758% ( 15) 00:07:56.142 10082.462 - 10132.874: 95.1527% ( 13) 00:07:56.142 10132.874 - 
10183.286: 95.2178% ( 11) 00:07:56.142 10183.286 - 10233.698: 95.2947% ( 13) 00:07:56.142 10233.698 - 10284.111: 95.3658% ( 12) 00:07:56.142 10284.111 - 10334.523: 95.4427% ( 13) 00:07:56.142 10334.523 - 10384.935: 95.5315% ( 15) 00:07:56.142 10384.935 - 10435.348: 95.6203% ( 15) 00:07:56.142 10435.348 - 10485.760: 95.7150% ( 16) 00:07:56.142 10485.760 - 10536.172: 95.8156% ( 17) 00:07:56.142 10536.172 - 10586.585: 95.9162% ( 17) 00:07:56.142 10586.585 - 10636.997: 95.9872% ( 12) 00:07:56.142 10636.997 - 10687.409: 96.0405% ( 9) 00:07:56.142 10687.409 - 10737.822: 96.0938% ( 9) 00:07:56.142 10737.822 - 10788.234: 96.1589% ( 11) 00:07:56.142 10788.234 - 10838.646: 96.2299% ( 12) 00:07:56.142 10838.646 - 10889.058: 96.3009% ( 12) 00:07:56.142 10889.058 - 10939.471: 96.3778% ( 13) 00:07:56.142 10939.471 - 10989.883: 96.4607% ( 14) 00:07:56.142 10989.883 - 11040.295: 96.5436% ( 14) 00:07:56.142 11040.295 - 11090.708: 96.6087% ( 11) 00:07:56.142 11090.708 - 11141.120: 96.7152% ( 18) 00:07:56.142 11141.120 - 11191.532: 96.8158% ( 17) 00:07:56.142 11191.532 - 11241.945: 96.9105% ( 16) 00:07:56.142 11241.945 - 11292.357: 96.9815% ( 12) 00:07:56.142 11292.357 - 11342.769: 97.0940% ( 19) 00:07:56.142 11342.769 - 11393.182: 97.1709% ( 13) 00:07:56.142 11393.182 - 11443.594: 97.2242% ( 9) 00:07:56.142 11443.594 - 11494.006: 97.2834% ( 10) 00:07:56.142 11494.006 - 11544.418: 97.3485% ( 11) 00:07:56.142 11544.418 - 11594.831: 97.4077% ( 10) 00:07:56.142 11594.831 - 11645.243: 97.4846% ( 13) 00:07:56.142 11645.243 - 11695.655: 97.5675% ( 14) 00:07:56.142 11695.655 - 11746.068: 97.6858% ( 20) 00:07:56.142 11746.068 - 11796.480: 97.7865% ( 17) 00:07:56.142 11796.480 - 11846.892: 97.8752% ( 15) 00:07:56.142 11846.892 - 11897.305: 97.9699% ( 16) 00:07:56.142 11897.305 - 11947.717: 98.0646% ( 16) 00:07:56.142 11947.717 - 11998.129: 98.1593% ( 16) 00:07:56.142 11998.129 - 12048.542: 98.2363% ( 13) 00:07:56.142 12048.542 - 12098.954: 98.3250% ( 15) 00:07:56.142 12098.954 - 12149.366: 98.4020% ( 13) 00:07:56.142 12149.366 - 12199.778: 98.4848% ( 14) 00:07:56.142 12199.778 - 12250.191: 98.5677% ( 14) 00:07:56.142 12250.191 - 12300.603: 98.6506% ( 14) 00:07:56.142 12300.603 - 12351.015: 98.7334% ( 14) 00:07:56.142 12351.015 - 12401.428: 98.8163% ( 14) 00:07:56.142 12401.428 - 12451.840: 98.8932% ( 13) 00:07:56.142 12451.840 - 12502.252: 98.9820% ( 15) 00:07:56.142 12502.252 - 12552.665: 99.0471% ( 11) 00:07:56.142 12552.665 - 12603.077: 99.0945% ( 8) 00:07:56.142 12603.077 - 12653.489: 99.1359% ( 7) 00:07:56.142 12653.489 - 12703.902: 99.1596% ( 4) 00:07:56.142 12703.902 - 12754.314: 99.1892% ( 5) 00:07:56.142 12754.314 - 12804.726: 99.2010% ( 2) 00:07:56.142 12804.726 - 12855.138: 99.2128% ( 2) 00:07:56.142 12855.138 - 12905.551: 99.2247% ( 2) 00:07:56.142 12905.551 - 13006.375: 99.2424% ( 3) 00:07:56.142 20467.397 - 20568.222: 99.2543% ( 2) 00:07:56.142 20568.222 - 20669.046: 99.2779% ( 4) 00:07:56.142 20669.046 - 20769.871: 99.3016% ( 4) 00:07:56.142 20769.871 - 20870.695: 99.3194% ( 3) 00:07:56.142 20870.695 - 20971.520: 99.3430% ( 4) 00:07:56.142 20971.520 - 21072.345: 99.3667% ( 4) 00:07:56.142 21072.345 - 21173.169: 99.3904% ( 4) 00:07:56.142 21173.169 - 21273.994: 99.4141% ( 4) 00:07:56.142 21273.994 - 21374.818: 99.4318% ( 3) 00:07:56.142 21374.818 - 21475.643: 99.4555% ( 4) 00:07:56.142 21475.643 - 21576.468: 99.4792% ( 4) 00:07:56.142 21576.468 - 21677.292: 99.5028% ( 4) 00:07:56.142 21677.292 - 21778.117: 99.5265% ( 4) 00:07:56.142 21778.117 - 21878.942: 99.5443% ( 3) 00:07:56.142 21878.942 - 
21979.766: 99.5679% ( 4) 00:07:56.142 21979.766 - 22080.591: 99.5916% ( 4) 00:07:56.142 22080.591 - 22181.415: 99.6153% ( 4) 00:07:56.142 22181.415 - 22282.240: 99.6212% ( 1) 00:07:56.142 26012.751 - 26214.400: 99.6567% ( 6) 00:07:56.142 26214.400 - 26416.049: 99.6982% ( 7) 00:07:56.142 26416.049 - 26617.698: 99.7396% ( 7) 00:07:56.142 26617.698 - 26819.348: 99.7869% ( 8) 00:07:56.142 26819.348 - 27020.997: 99.8343% ( 8) 00:07:56.142 27020.997 - 27222.646: 99.8757% ( 7) 00:07:56.142 27222.646 - 27424.295: 99.9231% ( 8) 00:07:56.142 27424.295 - 27625.945: 99.9645% ( 7) 00:07:56.142 27625.945 - 27827.594: 100.0000% ( 6) 00:07:56.142 00:07:56.142 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:56.142 ============================================================================== 00:07:56.142 Range in us Cumulative IO count 00:07:56.142 5444.529 - 5469.735: 0.0118% ( 2) 00:07:56.142 5469.735 - 5494.942: 0.0414% ( 5) 00:07:56.142 5494.942 - 5520.148: 0.0651% ( 4) 00:07:56.142 5520.148 - 5545.354: 0.1065% ( 7) 00:07:56.142 5545.354 - 5570.560: 0.1420% ( 6) 00:07:56.142 5570.560 - 5595.766: 0.1894% ( 8) 00:07:56.142 5595.766 - 5620.972: 0.2427% ( 9) 00:07:56.142 5620.972 - 5646.178: 0.3255% ( 14) 00:07:56.142 5646.178 - 5671.385: 0.4202% ( 16) 00:07:56.142 5671.385 - 5696.591: 0.5682% ( 25) 00:07:56.142 5696.591 - 5721.797: 0.7280% ( 27) 00:07:56.142 5721.797 - 5747.003: 0.9292% ( 34) 00:07:56.142 5747.003 - 5772.209: 1.1186% ( 32) 00:07:56.142 5772.209 - 5797.415: 1.2843% ( 28) 00:07:56.142 5797.415 - 5822.622: 1.5033% ( 37) 00:07:56.142 5822.622 - 5847.828: 1.7164% ( 36) 00:07:56.142 5847.828 - 5873.034: 1.9472% ( 39) 00:07:56.142 5873.034 - 5898.240: 2.2017% ( 43) 00:07:56.142 5898.240 - 5923.446: 2.4858% ( 48) 00:07:56.142 5923.446 - 5948.652: 2.8587% ( 63) 00:07:56.142 5948.652 - 5973.858: 3.1546% ( 50) 00:07:56.142 5973.858 - 5999.065: 3.4446% ( 49) 00:07:56.142 5999.065 - 6024.271: 3.7287% ( 48) 00:07:56.142 6024.271 - 6049.477: 4.0128% ( 48) 00:07:56.142 6049.477 - 6074.683: 4.3265% ( 53) 00:07:56.142 6074.683 - 6099.889: 4.7053% ( 64) 00:07:56.142 6099.889 - 6125.095: 5.0604% ( 60) 00:07:56.142 6125.095 - 6150.302: 5.4392% ( 64) 00:07:56.142 6150.302 - 6175.508: 5.8712% ( 73) 00:07:56.142 6175.508 - 6200.714: 6.3743% ( 85) 00:07:56.142 6200.714 - 6225.920: 6.8419% ( 79) 00:07:56.142 6225.920 - 6251.126: 7.3272% ( 82) 00:07:56.142 6251.126 - 6276.332: 7.9605% ( 107) 00:07:56.142 6276.332 - 6301.538: 8.5997% ( 108) 00:07:56.142 6301.538 - 6326.745: 9.1915% ( 100) 00:07:56.142 6326.745 - 6351.951: 9.7479% ( 94) 00:07:56.142 6351.951 - 6377.157: 10.3338% ( 99) 00:07:56.142 6377.157 - 6402.363: 10.9079% ( 97) 00:07:56.142 6402.363 - 6427.569: 11.4938% ( 99) 00:07:56.142 6427.569 - 6452.775: 12.1271% ( 107) 00:07:56.142 6452.775 - 6503.188: 13.4647% ( 226) 00:07:56.142 6503.188 - 6553.600: 14.8201% ( 229) 00:07:56.142 6553.600 - 6604.012: 16.2524% ( 242) 00:07:56.142 6604.012 - 6654.425: 17.7557% ( 254) 00:07:56.142 6654.425 - 6704.837: 19.3123% ( 263) 00:07:56.142 6704.837 - 6755.249: 20.8452% ( 259) 00:07:56.142 6755.249 - 6805.662: 22.5675% ( 291) 00:07:56.142 6805.662 - 6856.074: 24.3490% ( 301) 00:07:56.142 6856.074 - 6906.486: 26.3968% ( 346) 00:07:56.142 6906.486 - 6956.898: 28.5393% ( 362) 00:07:56.142 6956.898 - 7007.311: 30.6996% ( 365) 00:07:56.142 7007.311 - 7057.723: 33.1084% ( 407) 00:07:56.142 7057.723 - 7108.135: 35.7481% ( 446) 00:07:56.142 7108.135 - 7158.548: 38.5949% ( 481) 00:07:56.142 7158.548 - 7208.960: 41.6430% ( 515) 00:07:56.142 7208.960 - 
7259.372: 44.8094% ( 535) 00:07:56.142 7259.372 - 7309.785: 48.0410% ( 546) 00:07:56.142 7309.785 - 7360.197: 51.0713% ( 512) 00:07:56.142 7360.197 - 7410.609: 54.1489% ( 520) 00:07:56.142 7410.609 - 7461.022: 57.0372% ( 488) 00:07:56.142 7461.022 - 7511.434: 59.8189% ( 470) 00:07:56.142 7511.434 - 7561.846: 62.5651% ( 464) 00:07:56.142 7561.846 - 7612.258: 65.2580% ( 455) 00:07:56.142 7612.258 - 7662.671: 67.8918% ( 445) 00:07:56.142 7662.671 - 7713.083: 70.3243% ( 411) 00:07:56.142 7713.083 - 7763.495: 72.5971% ( 384) 00:07:56.142 7763.495 - 7813.908: 74.6804% ( 352) 00:07:56.142 7813.908 - 7864.320: 76.6809% ( 338) 00:07:56.142 7864.320 - 7914.732: 78.5985% ( 324) 00:07:56.142 7914.732 - 7965.145: 80.2143% ( 273) 00:07:56.142 7965.145 - 8015.557: 81.5874% ( 232) 00:07:56.142 8015.557 - 8065.969: 82.7711% ( 200) 00:07:56.142 8065.969 - 8116.382: 83.8482% ( 182) 00:07:56.142 8116.382 - 8166.794: 84.8781% ( 174) 00:07:56.142 8166.794 - 8217.206: 85.7895% ( 154) 00:07:56.142 8217.206 - 8267.618: 86.6832% ( 151) 00:07:56.142 8267.618 - 8318.031: 87.5710% ( 150) 00:07:56.142 8318.031 - 8368.443: 88.2931% ( 122) 00:07:56.142 8368.443 - 8418.855: 88.9323% ( 108) 00:07:56.143 8418.855 - 8469.268: 89.4413% ( 86) 00:07:56.143 8469.268 - 8519.680: 89.9148% ( 80) 00:07:56.143 8519.680 - 8570.092: 90.3646% ( 76) 00:07:56.143 8570.092 - 8620.505: 90.7552% ( 66) 00:07:56.143 8620.505 - 8670.917: 91.1044% ( 59) 00:07:56.143 8670.917 - 8721.329: 91.4240% ( 54) 00:07:56.143 8721.329 - 8771.742: 91.7140% ( 49) 00:07:56.143 8771.742 - 8822.154: 91.9508% ( 40) 00:07:56.143 8822.154 - 8872.566: 92.1638% ( 36) 00:07:56.143 8872.566 - 8922.978: 92.3236% ( 27) 00:07:56.143 8922.978 - 8973.391: 92.4361% ( 19) 00:07:56.143 8973.391 - 9023.803: 92.5426% ( 18) 00:07:56.143 9023.803 - 9074.215: 92.6373% ( 16) 00:07:56.143 9074.215 - 9124.628: 92.7261% ( 15) 00:07:56.143 9124.628 - 9175.040: 92.8681% ( 24) 00:07:56.143 9175.040 - 9225.452: 92.9865% ( 20) 00:07:56.143 9225.452 - 9275.865: 93.1049% ( 20) 00:07:56.143 9275.865 - 9326.277: 93.2232% ( 20) 00:07:56.143 9326.277 - 9376.689: 93.3594% ( 23) 00:07:56.143 9376.689 - 9427.102: 93.4482% ( 15) 00:07:56.143 9427.102 - 9477.514: 93.5488% ( 17) 00:07:56.143 9477.514 - 9527.926: 93.6553% ( 18) 00:07:56.143 9527.926 - 9578.338: 93.7678% ( 19) 00:07:56.143 9578.338 - 9628.751: 93.8684% ( 17) 00:07:56.143 9628.751 - 9679.163: 93.9808% ( 19) 00:07:56.143 9679.163 - 9729.575: 94.1051% ( 21) 00:07:56.143 9729.575 - 9779.988: 94.2294% ( 21) 00:07:56.143 9779.988 - 9830.400: 94.3596% ( 22) 00:07:56.143 9830.400 - 9880.812: 94.5135% ( 26) 00:07:56.143 9880.812 - 9931.225: 94.6733% ( 27) 00:07:56.143 9931.225 - 9981.637: 94.8272% ( 26) 00:07:56.143 9981.637 - 10032.049: 94.9633% ( 23) 00:07:56.143 10032.049 - 10082.462: 95.0580% ( 16) 00:07:56.143 10082.462 - 10132.874: 95.1586% ( 17) 00:07:56.143 10132.874 - 10183.286: 95.2592% ( 17) 00:07:56.143 10183.286 - 10233.698: 95.3421% ( 14) 00:07:56.143 10233.698 - 10284.111: 95.4131% ( 12) 00:07:56.143 10284.111 - 10334.523: 95.5019% ( 15) 00:07:56.143 10334.523 - 10384.935: 95.5848% ( 14) 00:07:56.143 10384.935 - 10435.348: 95.6558% ( 12) 00:07:56.143 10435.348 - 10485.760: 95.7446% ( 15) 00:07:56.143 10485.760 - 10536.172: 95.8333% ( 15) 00:07:56.143 10536.172 - 10586.585: 95.9399% ( 18) 00:07:56.143 10586.585 - 10636.997: 96.0405% ( 17) 00:07:56.143 10636.997 - 10687.409: 96.1411% ( 17) 00:07:56.143 10687.409 - 10737.822: 96.2180% ( 13) 00:07:56.143 10737.822 - 10788.234: 96.3009% ( 14) 00:07:56.143 10788.234 - 10838.646: 96.3897% 
( 15) 00:07:56.143 10838.646 - 10889.058: 96.4666% ( 13) 00:07:56.143 10889.058 - 10939.471: 96.5495% ( 14) 00:07:56.143 10939.471 - 10989.883: 96.6383% ( 15) 00:07:56.143 10989.883 - 11040.295: 96.7270% ( 15) 00:07:56.143 11040.295 - 11090.708: 96.8040% ( 13) 00:07:56.143 11090.708 - 11141.120: 96.8928% ( 15) 00:07:56.143 11141.120 - 11191.532: 96.9638% ( 12) 00:07:56.143 11191.532 - 11241.945: 97.0466% ( 14) 00:07:56.143 11241.945 - 11292.357: 97.1236% ( 13) 00:07:56.143 11292.357 - 11342.769: 97.1887% ( 11) 00:07:56.143 11342.769 - 11393.182: 97.2656% ( 13) 00:07:56.143 11393.182 - 11443.594: 97.3426% ( 13) 00:07:56.143 11443.594 - 11494.006: 97.4195% ( 13) 00:07:56.143 11494.006 - 11544.418: 97.4964% ( 13) 00:07:56.143 11544.418 - 11594.831: 97.5911% ( 16) 00:07:56.143 11594.831 - 11645.243: 97.6740% ( 14) 00:07:56.143 11645.243 - 11695.655: 97.7450% ( 12) 00:07:56.143 11695.655 - 11746.068: 97.8042% ( 10) 00:07:56.143 11746.068 - 11796.480: 97.8634% ( 10) 00:07:56.143 11796.480 - 11846.892: 97.9344% ( 12) 00:07:56.143 11846.892 - 11897.305: 97.9877% ( 9) 00:07:56.143 11897.305 - 11947.717: 98.0469% ( 10) 00:07:56.143 11947.717 - 11998.129: 98.1179% ( 12) 00:07:56.143 11998.129 - 12048.542: 98.1771% ( 10) 00:07:56.143 12048.542 - 12098.954: 98.2422% ( 11) 00:07:56.143 12098.954 - 12149.366: 98.3073% ( 11) 00:07:56.143 12149.366 - 12199.778: 98.3665% ( 10) 00:07:56.143 12199.778 - 12250.191: 98.4375% ( 12) 00:07:56.143 12250.191 - 12300.603: 98.5026% ( 11) 00:07:56.143 12300.603 - 12351.015: 98.5795% ( 13) 00:07:56.143 12351.015 - 12401.428: 98.6387% ( 10) 00:07:56.143 12401.428 - 12451.840: 98.6920% ( 9) 00:07:56.143 12451.840 - 12502.252: 98.7512% ( 10) 00:07:56.143 12502.252 - 12552.665: 98.8104% ( 10) 00:07:56.143 12552.665 - 12603.077: 98.8636% ( 9) 00:07:56.143 12603.077 - 12653.489: 98.9228% ( 10) 00:07:56.143 12653.489 - 12703.902: 98.9702% ( 8) 00:07:56.143 12703.902 - 12754.314: 99.0234% ( 9) 00:07:56.143 12754.314 - 12804.726: 99.0708% ( 8) 00:07:56.143 12804.726 - 12855.138: 99.1063% ( 6) 00:07:56.143 12855.138 - 12905.551: 99.1418% ( 6) 00:07:56.143 12905.551 - 13006.375: 99.2010% ( 10) 00:07:56.143 13006.375 - 13107.200: 99.2247% ( 4) 00:07:56.143 13107.200 - 13208.025: 99.2424% ( 3) 00:07:56.143 19358.326 - 19459.151: 99.2543% ( 2) 00:07:56.143 19459.151 - 19559.975: 99.2779% ( 4) 00:07:56.143 19559.975 - 19660.800: 99.3016% ( 4) 00:07:56.143 19660.800 - 19761.625: 99.3194% ( 3) 00:07:56.143 19761.625 - 19862.449: 99.3490% ( 5) 00:07:56.143 19862.449 - 19963.274: 99.3726% ( 4) 00:07:56.143 19963.274 - 20064.098: 99.3904% ( 3) 00:07:56.143 20064.098 - 20164.923: 99.4141% ( 4) 00:07:56.143 20164.923 - 20265.748: 99.4377% ( 4) 00:07:56.143 20265.748 - 20366.572: 99.4614% ( 4) 00:07:56.143 20366.572 - 20467.397: 99.4792% ( 3) 00:07:56.143 20467.397 - 20568.222: 99.5028% ( 4) 00:07:56.143 20568.222 - 20669.046: 99.5206% ( 3) 00:07:56.143 20669.046 - 20769.871: 99.5443% ( 4) 00:07:56.143 20769.871 - 20870.695: 99.5679% ( 4) 00:07:56.143 20870.695 - 20971.520: 99.5857% ( 3) 00:07:56.143 20971.520 - 21072.345: 99.6094% ( 4) 00:07:56.143 21072.345 - 21173.169: 99.6212% ( 2) 00:07:56.143 24802.855 - 24903.680: 99.6330% ( 2) 00:07:56.143 24903.680 - 25004.505: 99.6508% ( 3) 00:07:56.143 25004.505 - 25105.329: 99.6745% ( 4) 00:07:56.143 25105.329 - 25206.154: 99.6982% ( 4) 00:07:56.143 25206.154 - 25306.978: 99.7159% ( 3) 00:07:56.143 25306.978 - 25407.803: 99.7396% ( 4) 00:07:56.143 25407.803 - 25508.628: 99.7573% ( 3) 00:07:56.143 25508.628 - 25609.452: 99.7810% ( 4) 00:07:56.143 
00:07:56.143  25609.452 -  25710.277:   99.8047% (    4)
00:07:56.143  25710.277 -  25811.102:   99.8224% (    3)
00:07:56.143  25811.102 -  26012.751:   99.8698% (    8)
00:07:56.143  26012.751 -  26214.400:   99.9112% (    7)
00:07:56.143  26214.400 -  26416.049:   99.9527% (    7)
00:07:56.143  26416.049 -  26617.698:  100.0000% (    8)
00:07:56.143 
00:07:56.143 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:56.143 ==============================================================================
00:07:56.143        Range in us     Cumulative    IO count
[ per-bucket latency rows omitted: 5469.735us through 25004.505us, cumulative 0.0059% ( 1) to 100.0000% ( 2) ]
00:07:56.144 
00:07:56.144 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:56.144 ==============================================================================
00:07:56.144        Range in us     Cumulative    IO count
[ per-bucket latency rows omitted: 5444.529us through 23391.311us, cumulative 0.0059% ( 1) to 100.0000% ( 2) ]
00:07:56.146 
00:07:56.146 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:56.146 ==============================================================================
00:07:56.146        Range in us     Cumulative    IO count
[ per-bucket latency rows omitted: 5469.735us through 21778.117us, cumulative 0.0059% ( 1) to 100.0000% ( 2) ]
00:07:56.147 
00:07:56.147 02:18:43 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:07:57.518 Initializing NVMe Controllers
00:07:57.518 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:57.518 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:57.518 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:57.518 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:57.518 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:57.518 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:57.518 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:57.518 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:57.518 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:57.518 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:57.518 Initialization complete. Launching workers.
00:07:57.518 ========================================================
00:07:57.518                                                                           Latency(us)
00:07:57.518 Device Information                       :       IOPS      MiB/s    Average        min        max
00:07:57.518 PCIE (0000:00:10.0) NSID 1 from core  0:   15233.38     178.52    8413.42    6113.62   27615.05
00:07:57.518 PCIE (0000:00:11.0) NSID 1 from core  0:   15233.38     178.52    8400.70    6376.35   24828.13
00:07:57.518 PCIE (0000:00:13.0) NSID 1 from core  0:   15233.38     178.52    8387.50    5929.32   22968.11
00:07:57.518 PCIE (0000:00:12.0) NSID 1 from core  0:   15233.38     178.52    8374.18    6095.02   21869.68
00:07:57.518 PCIE (0000:00:12.0) NSID 2 from core  0:   15233.38     178.52    8361.03    6116.80   20376.74
00:07:57.518 PCIE (0000:00:12.0) NSID 3 from core  0:   15233.38     178.52    8347.87    6255.65   20002.85
00:07:57.518 ========================================================
00:07:57.518 Total                                    :   91400.25    1071.10    8380.78    5929.32   27615.05
00:07:57.518 
00:07:57.518 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:57.518 =================================================================================
00:07:57.518   1.00000% :  6604.012us
00:07:57.518  10.00000% :  7158.548us
00:07:57.518  25.00000% :  7511.434us
00:07:57.518  50.00000% :  8116.382us
00:07:57.518  75.00000% :  8872.566us
00:07:57.518  90.00000% :  9779.988us
00:07:57.518  95.00000% : 10536.172us
00:07:57.518  98.00000% : 12552.665us
00:07:57.518  99.00000% : 13611.323us
00:07:57.518  99.50000% : 19559.975us
00:07:57.518  99.90000% : 27222.646us
00:07:57.518  99.99000% : 27625.945us
00:07:57.518  99.99900% : 27625.945us
00:07:57.518  99.99990% : 27625.945us
00:07:57.518  99.99999% : 27625.945us
00:07:57.518 
00:07:57.518 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:57.518 =================================================================================
00:07:57.518   1.00000% :  6704.837us
00:07:57.518  10.00000% :  7259.372us
00:07:57.518  25.00000% :  7561.846us
00:07:57.518  50.00000% :  8065.969us
00:07:57.518  75.00000% :  8872.566us
00:07:57.518  90.00000% :  9779.988us
00:07:57.518  95.00000% : 10334.523us
00:07:57.518  98.00000% : 13107.200us
00:07:57.518  99.00000% : 13611.323us
00:07:57.518  99.50000% : 20164.923us
00:07:57.518  99.90000% : 24500.382us
00:07:57.518  99.99000% : 24802.855us
00:07:57.518  99.99900% : 24903.680us
00:07:57.518  99.99990% : 24903.680us
00:07:57.518  99.99999% : 24903.680us
00:07:57.518 
00:07:57.518 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:57.518 =================================================================================
00:07:57.518   1.00000% :  6553.600us
00:07:57.518  10.00000% :  7259.372us
00:07:57.518  25.00000% :  7561.846us
00:07:57.518  50.00000% :  8065.969us
00:07:57.518  75.00000% :  8872.566us
00:07:57.518  90.00000% :  9729.575us
00:07:57.518  95.00000% : 10334.523us
00:07:57.518  98.00000% : 12603.077us
00:07:57.518  99.00000% : 13611.323us
00:07:57.518  99.50000% : 19559.975us
00:07:57.518  99.90000% : 22887.188us
00:07:57.518  99.99000% : 22988.012us
00:07:57.518  99.99900% : 22988.012us
00:07:57.518  99.99990% : 22988.012us
00:07:57.518  99.99999% : 22988.012us
00:07:57.519 
00:07:57.519 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:57.519 =================================================================================
00:07:57.519   1.00000% :  6553.600us
00:07:57.519  10.00000% :  7259.372us
00:07:57.519  25.00000% :  7561.846us
00:07:57.519  50.00000% :  8065.969us
00:07:57.519  75.00000% :  8872.566us
00:07:57.519  90.00000% :  9628.751us
00:07:57.519  95.00000% : 10233.698us
00:07:57.519  98.00000% : 12653.489us
00:07:57.519  99.00000% : 13409.674us
00:07:57.519  99.50000% : 17845.957us
00:07:57.519  99.90000% : 21374.818us
00:07:57.519  99.99000% : 21878.942us
00:07:57.519  99.99900% : 21878.942us
00:07:57.519  99.99990% : 21878.942us
00:07:57.519  99.99999% : 21878.942us
00:07:57.519 
00:07:57.519 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:57.519 =================================================================================
00:07:57.519   1.00000% :  6553.600us
00:07:57.519  10.00000% :  7259.372us
00:07:57.519  25.00000% :  7561.846us
00:07:57.519  50.00000% :  8065.969us
00:07:57.519  75.00000% :  8872.566us
00:07:57.519  90.00000% :  9679.163us
00:07:57.519  95.00000% : 10334.523us
00:07:57.519  98.00000% : 12401.428us
00:07:57.519  99.00000% : 13712.148us
00:07:57.519  99.50000% : 16333.588us
00:07:57.519  99.90000% : 19963.274us
00:07:57.519  99.99000% : 20366.572us
00:07:57.519  99.99900% : 20467.397us
00:07:57.519  99.99990% : 20467.397us
00:07:57.519  99.99999% : 20467.397us
00:07:57.519 
00:07:57.519 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:57.519 =================================================================================
00:07:57.519   1.00000% :  6604.012us
00:07:57.519  10.00000% :  7259.372us
00:07:57.519  25.00000% :  7561.846us
00:07:57.519  50.00000% :  8015.557us
00:07:57.519  75.00000% :  8872.566us
00:07:57.519  90.00000% :  9729.575us
00:07:57.519  95.00000% : 10435.348us
00:07:57.519  98.00000% : 12401.428us
00:07:57.519  99.00000% : 13510.498us
00:07:57.519  99.50000% : 14216.271us
00:07:57.519  99.90000% : 19660.800us
00:07:57.519  99.99000% : 20064.098us
00:07:57.519  99.99900% : 20064.098us
00:07:57.519  99.99990% : 20064.098us
00:07:57.519  99.99999% : 20064.098us
00:07:57.519 
00:07:57.519 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:57.519 ==============================================================================
00:07:57.519        Range in us     Cumulative    IO count
[ per-bucket latency rows omitted: 6099.889us through 27625.945us, cumulative 0.0131% ( 2) to 100.0000% ( 7) ]
00:07:57.520 
00:07:57.520 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:57.520 ==============================================================================
00:07:57.520        Range in us     Cumulative    IO count
[ per-bucket latency rows omitted: 6351.951us through 24903.680us, cumulative 0.0065% ( 1) to 100.0000% ( 1) ]
00:07:57.521 
00:07:57.521 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:57.521 ==============================================================================
00:07:57.521        Range in us     Cumulative    IO count
[ per-bucket latency rows omitted: 5923.446us through 22988.012us, cumulative 0.0065% ( 1) to 100.0000% ( 14) ]
00:07:57.522 
00:07:57.522 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:57.522 ==============================================================================
00:07:57.522        Range in us     Cumulative    IO count
[ per-bucket latency rows omitted: 6074.683us through 12048.542us, cumulative 0.0065% ( 1) to 97.3326% ( 13) ]
00:07:57.522  12048.542 -  12098.954:   97.3980% (
10) 00:07:57.522 12098.954 - 12149.366: 97.4307% ( 5) 00:07:57.522 12149.366 - 12199.778: 97.4569% ( 4) 00:07:57.522 12199.778 - 12250.191: 97.4830% ( 4) 00:07:57.522 12250.191 - 12300.603: 97.5092% ( 4) 00:07:57.522 12300.603 - 12351.015: 97.5418% ( 5) 00:07:57.522 12351.015 - 12401.428: 97.6007% ( 9) 00:07:57.522 12401.428 - 12451.840: 97.6595% ( 9) 00:07:57.522 12451.840 - 12502.252: 97.7707% ( 17) 00:07:57.522 12502.252 - 12552.665: 97.8556% ( 13) 00:07:57.522 12552.665 - 12603.077: 97.9668% ( 17) 00:07:57.522 12603.077 - 12653.489: 98.0714% ( 16) 00:07:57.522 12653.489 - 12703.902: 98.1956% ( 19) 00:07:57.522 12703.902 - 12754.314: 98.2610% ( 10) 00:07:57.522 12754.314 - 12804.726: 98.3656% ( 16) 00:07:57.522 12804.726 - 12855.138: 98.4375% ( 11) 00:07:57.522 12855.138 - 12905.551: 98.5486% ( 17) 00:07:57.522 12905.551 - 13006.375: 98.6794% ( 20) 00:07:57.522 13006.375 - 13107.200: 98.7775% ( 15) 00:07:57.522 13107.200 - 13208.025: 98.8624% ( 13) 00:07:57.522 13208.025 - 13308.849: 98.9671% ( 16) 00:07:57.522 13308.849 - 13409.674: 99.0651% ( 15) 00:07:57.522 13409.674 - 13510.498: 99.1370% ( 11) 00:07:57.522 13510.498 - 13611.323: 99.1632% ( 4) 00:07:57.522 16837.711 - 16938.535: 99.1697% ( 1) 00:07:57.522 17140.185 - 17241.009: 99.2089% ( 6) 00:07:57.522 17241.009 - 17341.834: 99.2286% ( 3) 00:07:57.522 17341.834 - 17442.658: 99.2482% ( 3) 00:07:57.522 17442.658 - 17543.483: 99.3005% ( 8) 00:07:57.522 17543.483 - 17644.308: 99.4116% ( 17) 00:07:57.522 17644.308 - 17745.132: 99.4835% ( 11) 00:07:57.522 17745.132 - 17845.957: 99.5816% ( 15) 00:07:57.522 20971.520 - 21072.345: 99.6470% ( 10) 00:07:57.522 21072.345 - 21173.169: 99.7254% ( 12) 00:07:57.522 21173.169 - 21273.994: 99.8431% ( 18) 00:07:57.522 21273.994 - 21374.818: 99.9281% ( 13) 00:07:57.522 21374.818 - 21475.643: 99.9412% ( 2) 00:07:57.522 21576.468 - 21677.292: 99.9608% ( 3) 00:07:57.522 21677.292 - 21778.117: 99.9804% ( 3) 00:07:57.522 21778.117 - 21878.942: 100.0000% ( 3) 00:07:57.522 00:07:57.522 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:57.522 ============================================================================== 00:07:57.523 Range in us Cumulative IO count 00:07:57.523 6099.889 - 6125.095: 0.0065% ( 1) 00:07:57.523 6125.095 - 6150.302: 0.0131% ( 1) 00:07:57.523 6175.508 - 6200.714: 0.0196% ( 1) 00:07:57.523 6200.714 - 6225.920: 0.0262% ( 1) 00:07:57.523 6251.126 - 6276.332: 0.0327% ( 1) 00:07:57.523 6276.332 - 6301.538: 0.0588% ( 4) 00:07:57.523 6301.538 - 6326.745: 0.0850% ( 4) 00:07:57.523 6326.745 - 6351.951: 0.1373% ( 8) 00:07:57.523 6351.951 - 6377.157: 0.1896% ( 8) 00:07:57.523 6377.157 - 6402.363: 0.2877% ( 15) 00:07:57.523 6402.363 - 6427.569: 0.3792% ( 14) 00:07:57.523 6427.569 - 6452.775: 0.5688% ( 29) 00:07:57.523 6452.775 - 6503.188: 0.9087% ( 52) 00:07:57.523 6503.188 - 6553.600: 1.3729% ( 71) 00:07:57.523 6553.600 - 6604.012: 1.7848% ( 63) 00:07:57.523 6604.012 - 6654.425: 2.0136% ( 35) 00:07:57.523 6654.425 - 6704.837: 2.2032% ( 29) 00:07:57.523 6704.837 - 6755.249: 2.4712% ( 41) 00:07:57.523 6755.249 - 6805.662: 2.7524% ( 43) 00:07:57.523 6805.662 - 6856.074: 3.0531% ( 46) 00:07:57.523 6856.074 - 6906.486: 3.6219% ( 87) 00:07:57.523 6906.486 - 6956.898: 4.1579% ( 82) 00:07:57.523 6956.898 - 7007.311: 4.7660% ( 93) 00:07:57.523 7007.311 - 7057.723: 5.7793% ( 155) 00:07:57.523 7057.723 - 7108.135: 6.7730% ( 152) 00:07:57.523 7108.135 - 7158.548: 7.7798% ( 154) 00:07:57.523 7158.548 - 7208.960: 9.2573% ( 226) 00:07:57.523 7208.960 - 7259.372: 10.6956% ( 220) 
00:07:57.523 7259.372 - 7309.785: 12.3627% ( 255) 00:07:57.523 7309.785 - 7360.197: 14.4874% ( 325) 00:07:57.523 7360.197 - 7410.609: 16.6514% ( 331) 00:07:57.523 7410.609 - 7461.022: 19.4691% ( 431) 00:07:57.523 7461.022 - 7511.434: 22.3130% ( 435) 00:07:57.523 7511.434 - 7561.846: 25.5557% ( 496) 00:07:57.523 7561.846 - 7612.258: 28.2688% ( 415) 00:07:57.523 7612.258 - 7662.671: 31.2042% ( 449) 00:07:57.523 7662.671 - 7713.083: 34.4012% ( 489) 00:07:57.523 7713.083 - 7763.495: 37.2712% ( 439) 00:07:57.523 7763.495 - 7813.908: 40.0235% ( 421) 00:07:57.523 7813.908 - 7864.320: 42.6321% ( 399) 00:07:57.523 7864.320 - 7914.732: 45.1098% ( 379) 00:07:57.523 7914.732 - 7965.145: 47.6530% ( 389) 00:07:57.523 7965.145 - 8015.557: 49.7516% ( 321) 00:07:57.523 8015.557 - 8065.969: 51.4906% ( 266) 00:07:57.523 8065.969 - 8116.382: 53.0792% ( 243) 00:07:57.523 8116.382 - 8166.794: 54.5829% ( 230) 00:07:57.523 8166.794 - 8217.206: 56.2042% ( 248) 00:07:57.523 8217.206 - 8267.618: 57.8517% ( 252) 00:07:57.523 8267.618 - 8318.031: 59.3946% ( 236) 00:07:57.523 8318.031 - 8368.443: 61.1336% ( 266) 00:07:57.523 8368.443 - 8418.855: 62.9315% ( 275) 00:07:57.523 8418.855 - 8469.268: 64.4417% ( 231) 00:07:57.523 8469.268 - 8519.680: 66.2003% ( 269) 00:07:57.523 8519.680 - 8570.092: 67.6059% ( 215) 00:07:57.523 8570.092 - 8620.505: 69.0638% ( 223) 00:07:57.523 8620.505 - 8670.917: 70.3190% ( 192) 00:07:57.523 8670.917 - 8721.329: 71.5481% ( 188) 00:07:57.523 8721.329 - 8771.742: 72.8622% ( 201) 00:07:57.523 8771.742 - 8822.154: 74.3593% ( 229) 00:07:57.523 8822.154 - 8872.566: 76.0591% ( 260) 00:07:57.523 8872.566 - 8922.978: 77.1313% ( 164) 00:07:57.523 8922.978 - 8973.391: 78.3277% ( 183) 00:07:57.523 8973.391 - 9023.803: 79.5894% ( 193) 00:07:57.523 9023.803 - 9074.215: 80.9231% ( 204) 00:07:57.523 9074.215 - 9124.628: 82.1980% ( 195) 00:07:57.523 9124.628 - 9175.040: 83.3944% ( 183) 00:07:57.523 9175.040 - 9225.452: 84.4992% ( 169) 00:07:57.523 9225.452 - 9275.865: 85.2641% ( 117) 00:07:57.523 9275.865 - 9326.277: 86.0094% ( 114) 00:07:57.523 9326.277 - 9376.689: 86.7482% ( 113) 00:07:57.523 9376.689 - 9427.102: 87.5654% ( 125) 00:07:57.523 9427.102 - 9477.514: 88.1799% ( 94) 00:07:57.523 9477.514 - 9527.926: 88.8141% ( 97) 00:07:57.523 9527.926 - 9578.338: 89.4155% ( 92) 00:07:57.523 9578.338 - 9628.751: 89.9974% ( 89) 00:07:57.523 9628.751 - 9679.163: 90.5531% ( 85) 00:07:57.523 9679.163 - 9729.575: 91.1938% ( 98) 00:07:57.523 9729.575 - 9779.988: 91.6645% ( 72) 00:07:57.523 9779.988 - 9830.400: 92.1287% ( 71) 00:07:57.523 9830.400 - 9880.812: 92.5863% ( 70) 00:07:57.523 9880.812 - 9931.225: 92.9524% ( 56) 00:07:57.523 9931.225 - 9981.637: 93.3120% ( 55) 00:07:57.523 9981.637 - 10032.049: 93.6062% ( 45) 00:07:57.523 10032.049 - 10082.462: 93.9723% ( 56) 00:07:57.523 10082.462 - 10132.874: 94.2534% ( 43) 00:07:57.523 10132.874 - 10183.286: 94.5214% ( 41) 00:07:57.523 10183.286 - 10233.698: 94.7699% ( 38) 00:07:57.523 10233.698 - 10284.111: 94.9856% ( 33) 00:07:57.523 10284.111 - 10334.523: 95.1556% ( 26) 00:07:57.523 10334.523 - 10384.935: 95.2994% ( 22) 00:07:57.523 10384.935 - 10435.348: 95.4825% ( 28) 00:07:57.523 10435.348 - 10485.760: 95.6525% ( 26) 00:07:57.523 10485.760 - 10536.172: 95.8682% ( 33) 00:07:57.523 10536.172 - 10586.585: 96.0251% ( 24) 00:07:57.523 10586.585 - 10636.997: 96.1493% ( 19) 00:07:57.523 10636.997 - 10687.409: 96.2408% ( 14) 00:07:57.523 10687.409 - 10737.822: 96.3193% ( 12) 00:07:57.523 10737.822 - 10788.234: 96.3781% ( 9) 00:07:57.523 10788.234 - 10838.646: 96.4174% ( 6) 
00:07:57.523 10838.646 - 10889.058: 96.4370% ( 3) 00:07:57.523 10889.058 - 10939.471: 96.4631% ( 4) 00:07:57.523 10939.471 - 10989.883: 96.4958% ( 5) 00:07:57.523 10989.883 - 11040.295: 96.5285% ( 5) 00:07:57.523 11040.295 - 11090.708: 96.5612% ( 5) 00:07:57.523 11090.708 - 11141.120: 96.6200% ( 9) 00:07:57.523 11141.120 - 11191.532: 96.7050% ( 13) 00:07:57.523 11191.532 - 11241.945: 96.7769% ( 11) 00:07:57.523 11241.945 - 11292.357: 96.8488% ( 11) 00:07:57.523 11292.357 - 11342.769: 96.8619% ( 2) 00:07:57.523 11342.769 - 11393.182: 96.8685% ( 1) 00:07:57.523 11393.182 - 11443.594: 96.8815% ( 2) 00:07:57.523 11443.594 - 11494.006: 96.8946% ( 2) 00:07:57.523 11494.006 - 11544.418: 96.9142% ( 3) 00:07:57.523 11544.418 - 11594.831: 96.9273% ( 2) 00:07:57.523 11594.831 - 11645.243: 96.9600% ( 5) 00:07:57.523 11645.243 - 11695.655: 97.0254% ( 10) 00:07:57.523 11695.655 - 11746.068: 97.0646% ( 6) 00:07:57.523 11746.068 - 11796.480: 97.1169% ( 8) 00:07:57.523 11796.480 - 11846.892: 97.1757% ( 9) 00:07:57.523 11846.892 - 11897.305: 97.2280% ( 8) 00:07:57.523 11897.305 - 11947.717: 97.3065% ( 12) 00:07:57.523 11947.717 - 11998.129: 97.3719% ( 10) 00:07:57.523 11998.129 - 12048.542: 97.4895% ( 18) 00:07:57.523 12048.542 - 12098.954: 97.5941% ( 16) 00:07:57.523 12098.954 - 12149.366: 97.6726% ( 12) 00:07:57.523 12149.366 - 12199.778: 97.7641% ( 14) 00:07:57.523 12199.778 - 12250.191: 97.8687% ( 16) 00:07:57.523 12250.191 - 12300.603: 97.9145% ( 7) 00:07:57.523 12300.603 - 12351.015: 97.9799% ( 10) 00:07:57.523 12351.015 - 12401.428: 98.0714% ( 14) 00:07:57.523 12401.428 - 12451.840: 98.1498% ( 12) 00:07:57.523 12451.840 - 12502.252: 98.2283% ( 12) 00:07:57.523 12502.252 - 12552.665: 98.2806% ( 8) 00:07:57.523 12552.665 - 12603.077: 98.3198% ( 6) 00:07:57.523 12603.077 - 12653.489: 98.3590% ( 6) 00:07:57.523 12653.489 - 12703.902: 98.4048% ( 7) 00:07:57.523 12703.902 - 12754.314: 98.4310% ( 4) 00:07:57.523 12754.314 - 12804.726: 98.4637% ( 5) 00:07:57.523 12804.726 - 12855.138: 98.4833% ( 3) 00:07:57.523 12855.138 - 12905.551: 98.4963% ( 2) 00:07:57.523 12905.551 - 13006.375: 98.5225% ( 4) 00:07:57.523 13006.375 - 13107.200: 98.5748% ( 8) 00:07:57.523 13107.200 - 13208.025: 98.6467% ( 11) 00:07:57.523 13208.025 - 13308.849: 98.7578% ( 17) 00:07:57.523 13308.849 - 13409.674: 98.8101% ( 8) 00:07:57.523 13409.674 - 13510.498: 98.8624% ( 8) 00:07:57.523 13510.498 - 13611.323: 98.9736% ( 17) 00:07:57.523 13611.323 - 13712.148: 99.0455% ( 11) 00:07:57.523 13712.148 - 13812.972: 99.0913% ( 7) 00:07:57.523 13812.972 - 13913.797: 99.1436% ( 8) 00:07:57.523 13913.797 - 14014.622: 99.1632% ( 3) 00:07:57.523 15123.692 - 15224.517: 99.1697% ( 1) 00:07:57.523 15930.289 - 16031.114: 99.2482% ( 12) 00:07:57.523 16031.114 - 16131.938: 99.3332% ( 13) 00:07:57.523 16131.938 - 16232.763: 99.4312% ( 15) 00:07:57.523 16232.763 - 16333.588: 99.5162% ( 13) 00:07:57.523 16333.588 - 16434.412: 99.5816% ( 10) 00:07:57.523 18854.203 - 18955.028: 99.6012% ( 3) 00:07:57.523 18955.028 - 19055.852: 99.6274% ( 4) 00:07:57.523 19055.852 - 19156.677: 99.6535% ( 4) 00:07:57.524 19156.677 - 19257.502: 99.6797% ( 4) 00:07:57.524 19257.502 - 19358.326: 99.6993% ( 3) 00:07:57.524 19358.326 - 19459.151: 99.7254% ( 4) 00:07:57.524 19459.151 - 19559.975: 99.7516% ( 4) 00:07:57.524 19559.975 - 19660.800: 99.7712% ( 3) 00:07:57.524 19660.800 - 19761.625: 99.7973% ( 4) 00:07:57.524 19761.625 - 19862.449: 99.8104% ( 2) 00:07:57.524 19862.449 - 19963.274: 99.9019% ( 14) 00:07:57.524 19963.274 - 20064.098: 99.9281% ( 4) 00:07:57.524 20064.098 - 
20164.923: 99.9477% ( 3) 00:07:57.524 20164.923 - 20265.748: 99.9738% ( 4) 00:07:57.524 20265.748 - 20366.572: 99.9935% ( 3) 00:07:57.524 20366.572 - 20467.397: 100.0000% ( 1) 00:07:57.524 00:07:57.524 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:57.524 ============================================================================== 00:07:57.524 Range in us Cumulative IO count 00:07:57.524 6251.126 - 6276.332: 0.0131% ( 2) 00:07:57.524 6377.157 - 6402.363: 0.0262% ( 2) 00:07:57.524 6402.363 - 6427.569: 0.0654% ( 6) 00:07:57.524 6427.569 - 6452.775: 0.1308% ( 10) 00:07:57.524 6452.775 - 6503.188: 0.3073% ( 27) 00:07:57.524 6503.188 - 6553.600: 0.8107% ( 77) 00:07:57.524 6553.600 - 6604.012: 1.3991% ( 90) 00:07:57.524 6604.012 - 6654.425: 1.9482% ( 84) 00:07:57.524 6654.425 - 6704.837: 2.1836% ( 36) 00:07:57.524 6704.837 - 6755.249: 2.5105% ( 50) 00:07:57.524 6755.249 - 6805.662: 2.8439% ( 51) 00:07:57.524 6805.662 - 6856.074: 3.2558% ( 63) 00:07:57.524 6856.074 - 6906.486: 3.6611% ( 62) 00:07:57.524 6906.486 - 6956.898: 4.1841% ( 80) 00:07:57.524 6956.898 - 7007.311: 4.7398% ( 85) 00:07:57.524 7007.311 - 7057.723: 5.3674% ( 96) 00:07:57.524 7057.723 - 7108.135: 6.5311% ( 178) 00:07:57.524 7108.135 - 7158.548: 7.7340% ( 184) 00:07:57.524 7158.548 - 7208.960: 9.4469% ( 262) 00:07:57.524 7208.960 - 7259.372: 11.1925% ( 267) 00:07:57.524 7259.372 - 7309.785: 13.0034% ( 277) 00:07:57.524 7309.785 - 7360.197: 15.3831% ( 364) 00:07:57.524 7360.197 - 7410.609: 17.7432% ( 361) 00:07:57.524 7410.609 - 7461.022: 20.5413% ( 428) 00:07:57.524 7461.022 - 7511.434: 23.0845% ( 389) 00:07:57.524 7511.434 - 7561.846: 25.6668% ( 395) 00:07:57.524 7561.846 - 7612.258: 28.6676% ( 459) 00:07:57.524 7612.258 - 7662.671: 31.6749% ( 460) 00:07:57.524 7662.671 - 7713.083: 34.4927% ( 431) 00:07:57.524 7713.083 - 7763.495: 37.0620% ( 393) 00:07:57.524 7763.495 - 7813.908: 39.8666% ( 429) 00:07:57.524 7813.908 - 7864.320: 42.6386% ( 424) 00:07:57.524 7864.320 - 7914.732: 45.5805% ( 450) 00:07:57.524 7914.732 - 7965.145: 48.3852% ( 429) 00:07:57.524 7965.145 - 8015.557: 50.1504% ( 270) 00:07:57.524 8015.557 - 8065.969: 51.7456% ( 244) 00:07:57.524 8065.969 - 8116.382: 53.3407% ( 244) 00:07:57.524 8116.382 - 8166.794: 54.8771% ( 235) 00:07:57.524 8166.794 - 8217.206: 56.7599% ( 288) 00:07:57.524 8217.206 - 8267.618: 58.1917% ( 219) 00:07:57.524 8267.618 - 8318.031: 59.6169% ( 218) 00:07:57.524 8318.031 - 8368.443: 60.9898% ( 210) 00:07:57.524 8368.443 - 8418.855: 62.5262% ( 235) 00:07:57.524 8418.855 - 8469.268: 64.2063% ( 257) 00:07:57.524 8469.268 - 8519.680: 65.8015% ( 244) 00:07:57.524 8519.680 - 8570.092: 67.2594% ( 223) 00:07:57.524 8570.092 - 8620.505: 68.8023% ( 236) 00:07:57.524 8620.505 - 8670.917: 70.1360% ( 204) 00:07:57.524 8670.917 - 8721.329: 71.5089% ( 210) 00:07:57.524 8721.329 - 8771.742: 72.8491% ( 205) 00:07:57.524 8771.742 - 8822.154: 74.1697% ( 202) 00:07:57.524 8822.154 - 8872.566: 75.7257% ( 238) 00:07:57.524 8872.566 - 8922.978: 77.0267% ( 199) 00:07:57.524 8922.978 - 8973.391: 78.5826% ( 238) 00:07:57.524 8973.391 - 9023.803: 80.0667% ( 227) 00:07:57.524 9023.803 - 9074.215: 81.5573% ( 228) 00:07:57.524 9074.215 - 9124.628: 82.6687% ( 170) 00:07:57.524 9124.628 - 9175.040: 83.4859% ( 125) 00:07:57.524 9175.040 - 9225.452: 84.2377% ( 115) 00:07:57.524 9225.452 - 9275.865: 85.0811% ( 129) 00:07:57.524 9275.865 - 9326.277: 85.8460% ( 117) 00:07:57.524 9326.277 - 9376.689: 86.5651% ( 110) 00:07:57.524 9376.689 - 9427.102: 87.1666% ( 92) 00:07:57.524 9427.102 - 9477.514: 87.7615% 
( 91) 00:07:57.524 9477.514 - 9527.926: 88.2584% ( 76) 00:07:57.524 9527.926 - 9578.338: 88.7225% ( 71) 00:07:57.524 9578.338 - 9628.751: 89.2848% ( 86) 00:07:57.524 9628.751 - 9679.163: 89.7293% ( 68) 00:07:57.524 9679.163 - 9729.575: 90.1412% ( 63) 00:07:57.524 9729.575 - 9779.988: 90.5923% ( 69) 00:07:57.524 9779.988 - 9830.400: 91.0826% ( 75) 00:07:57.524 9830.400 - 9880.812: 91.5860% ( 77) 00:07:57.524 9880.812 - 9931.225: 91.9979% ( 63) 00:07:57.524 9931.225 - 9981.637: 92.4621% ( 71) 00:07:57.524 9981.637 - 10032.049: 93.0243% ( 86) 00:07:57.524 10032.049 - 10082.462: 93.3054% ( 43) 00:07:57.524 10082.462 - 10132.874: 93.5866% ( 43) 00:07:57.524 10132.874 - 10183.286: 93.7958% ( 32) 00:07:57.524 10183.286 - 10233.698: 94.0442% ( 38) 00:07:57.524 10233.698 - 10284.111: 94.2992% ( 39) 00:07:57.524 10284.111 - 10334.523: 94.5672% ( 41) 00:07:57.524 10334.523 - 10384.935: 94.8483% ( 43) 00:07:57.524 10384.935 - 10435.348: 95.0641% ( 33) 00:07:57.524 10435.348 - 10485.760: 95.2537% ( 29) 00:07:57.524 10485.760 - 10536.172: 95.4236% ( 26) 00:07:57.524 10536.172 - 10586.585: 95.5936% ( 26) 00:07:57.524 10586.585 - 10636.997: 95.8094% ( 33) 00:07:57.524 10636.997 - 10687.409: 96.1689% ( 55) 00:07:57.524 10687.409 - 10737.822: 96.3912% ( 34) 00:07:57.524 10737.822 - 10788.234: 96.4893% ( 15) 00:07:57.524 10788.234 - 10838.646: 96.5677% ( 12) 00:07:57.524 10838.646 - 10889.058: 96.6789% ( 17) 00:07:57.524 10889.058 - 10939.471: 96.7639% ( 13) 00:07:57.524 10939.471 - 10989.883: 96.8750% ( 17) 00:07:57.524 10989.883 - 11040.295: 96.9731% ( 15) 00:07:57.524 11040.295 - 11090.708: 97.0777% ( 16) 00:07:57.524 11090.708 - 11141.120: 97.1365% ( 9) 00:07:57.524 11141.120 - 11191.532: 97.1823% ( 7) 00:07:57.524 11191.532 - 11241.945: 97.2150% ( 5) 00:07:57.524 11241.945 - 11292.357: 97.2803% ( 10) 00:07:57.524 11292.357 - 11342.769: 97.3522% ( 11) 00:07:57.524 11342.769 - 11393.182: 97.4111% ( 9) 00:07:57.524 11393.182 - 11443.594: 97.4503% ( 6) 00:07:57.524 11443.594 - 11494.006: 97.5092% ( 9) 00:07:57.524 11494.006 - 11544.418: 97.5876% ( 12) 00:07:57.524 11544.418 - 11594.831: 97.6661% ( 12) 00:07:57.524 11594.831 - 11645.243: 97.6857% ( 3) 00:07:57.524 11645.243 - 11695.655: 97.7118% ( 4) 00:07:57.524 11695.655 - 11746.068: 97.7380% ( 4) 00:07:57.524 11746.068 - 11796.480: 97.7576% ( 3) 00:07:57.524 11796.480 - 11846.892: 97.7837% ( 4) 00:07:57.524 11846.892 - 11897.305: 97.8033% ( 3) 00:07:57.524 11897.305 - 11947.717: 97.8295% ( 4) 00:07:57.524 11947.717 - 11998.129: 97.8556% ( 4) 00:07:57.524 11998.129 - 12048.542: 97.8622% ( 1) 00:07:57.524 12048.542 - 12098.954: 97.8883% ( 4) 00:07:57.524 12098.954 - 12149.366: 97.9145% ( 4) 00:07:57.524 12149.366 - 12199.778: 97.9472% ( 5) 00:07:57.524 12199.778 - 12250.191: 97.9733% ( 4) 00:07:57.524 12250.191 - 12300.603: 97.9864% ( 2) 00:07:57.524 12300.603 - 12351.015: 97.9995% ( 2) 00:07:57.524 12351.015 - 12401.428: 98.0191% ( 3) 00:07:57.524 12401.428 - 12451.840: 98.0322% ( 2) 00:07:57.524 12451.840 - 12502.252: 98.0452% ( 2) 00:07:57.524 12502.252 - 12552.665: 98.0649% ( 3) 00:07:57.524 12552.665 - 12603.077: 98.0845% ( 3) 00:07:57.524 12603.077 - 12653.489: 98.1172% ( 5) 00:07:57.524 12653.489 - 12703.902: 98.2218% ( 16) 00:07:57.524 12703.902 - 12754.314: 98.2610% ( 6) 00:07:57.524 12754.314 - 12804.726: 98.2806% ( 3) 00:07:57.524 12804.726 - 12855.138: 98.2871% ( 1) 00:07:57.524 12855.138 - 12905.551: 98.3133% ( 4) 00:07:57.524 12905.551 - 13006.375: 98.3460% ( 5) 00:07:57.524 13006.375 - 13107.200: 98.3787% ( 5) 00:07:57.524 13107.200 - 
13208.025: 98.5160% ( 21) 00:07:57.524 13208.025 - 13308.849: 98.7252% ( 32) 00:07:57.524 13308.849 - 13409.674: 98.9082% ( 28) 00:07:57.524 13409.674 - 13510.498: 99.1763% ( 41) 00:07:57.524 13510.498 - 13611.323: 99.2874% ( 17) 00:07:57.524 13611.323 - 13712.148: 99.3658% ( 12) 00:07:57.524 13712.148 - 13812.972: 99.4116% ( 7) 00:07:57.524 13812.972 - 13913.797: 99.4312% ( 3) 00:07:57.524 13913.797 - 14014.622: 99.4574% ( 4) 00:07:57.524 14014.622 - 14115.446: 99.4770% ( 3) 00:07:57.524 14115.446 - 14216.271: 99.5031% ( 4) 00:07:57.524 14216.271 - 14317.095: 99.5228% ( 3) 00:07:57.524 14317.095 - 14417.920: 99.5489% ( 4) 00:07:57.524 14417.920 - 14518.745: 99.5685% ( 3) 00:07:57.524 14518.745 - 14619.569: 99.5816% ( 2) 00:07:57.524 17442.658 - 17543.483: 99.6012% ( 3) 00:07:57.524 17543.483 - 17644.308: 99.6208% ( 3) 00:07:57.524 17644.308 - 17745.132: 99.6339% ( 2) 00:07:57.524 17745.132 - 17845.957: 99.6927% ( 9) 00:07:57.524 18652.554 - 18753.378: 99.7189% ( 4) 00:07:57.524 18753.378 - 18854.203: 99.7385% ( 3) 00:07:57.524 18854.203 - 18955.028: 99.7581% ( 3) 00:07:57.524 18955.028 - 19055.852: 99.7843% ( 4) 00:07:57.524 19055.852 - 19156.677: 99.8039% ( 3) 00:07:57.524 19156.677 - 19257.502: 99.8300% ( 4) 00:07:57.524 19257.502 - 19358.326: 99.8496% ( 3) 00:07:57.524 19358.326 - 19459.151: 99.8758% ( 4) 00:07:57.524 19459.151 - 19559.975: 99.8954% ( 3) 00:07:57.524 19559.975 - 19660.800: 99.9215% ( 4) 00:07:57.524 19660.800 - 19761.625: 99.9412% ( 3) 00:07:57.524 19761.625 - 19862.449: 99.9608% ( 3) 00:07:57.524 19862.449 - 19963.274: 99.9869% ( 4) 00:07:57.524 19963.274 - 20064.098: 100.0000% ( 2) 00:07:57.524 00:07:57.524 02:18:44 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:57.524 00:07:57.524 real 0m2.534s 00:07:57.524 user 0m2.217s 00:07:57.524 sys 0m0.211s 00:07:57.524 02:18:44 nvme.nvme_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:57.524 02:18:44 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:57.524 ************************************ 00:07:57.525 END TEST nvme_perf 00:07:57.525 ************************************ 00:07:57.525 02:18:44 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:57.525 02:18:44 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:07:57.525 02:18:44 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:57.525 02:18:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:57.525 ************************************ 00:07:57.525 START TEST nvme_hello_world 00:07:57.525 ************************************ 00:07:57.525 02:18:44 nvme.nvme_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:57.525 Initializing NVMe Controllers 00:07:57.525 Attached to 0000:00:10.0 00:07:57.525 Namespace ID: 1 size: 6GB 00:07:57.525 Attached to 0000:00:11.0 00:07:57.525 Namespace ID: 1 size: 5GB 00:07:57.525 Attached to 0000:00:13.0 00:07:57.525 Namespace ID: 1 size: 1GB 00:07:57.525 Attached to 0000:00:12.0 00:07:57.525 Namespace ID: 1 size: 4GB 00:07:57.525 Namespace ID: 2 size: 4GB 00:07:57.525 Namespace ID: 3 size: 4GB 00:07:57.525 Initialization complete. 00:07:57.525 INFO: using host memory buffer for IO 00:07:57.525 Hello world! 00:07:57.525 INFO: using host memory buffer for IO 00:07:57.525 Hello world! 00:07:57.525 INFO: using host memory buffer for IO 00:07:57.525 Hello world! 
00:07:57.525 INFO: using host memory buffer for IO 00:07:57.525 Hello world! 00:07:57.525 INFO: using host memory buffer for IO 00:07:57.525 Hello world! 00:07:57.525 INFO: using host memory buffer for IO 00:07:57.525 Hello world! 00:07:57.525 00:07:57.525 real 0m0.237s 00:07:57.525 user 0m0.094s 00:07:57.525 sys 0m0.098s 00:07:57.525 02:18:44 nvme.nvme_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:57.525 02:18:44 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:57.525 ************************************ 00:07:57.525 END TEST nvme_hello_world 00:07:57.525 ************************************ 00:07:57.782 02:18:44 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:57.782 02:18:44 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:57.782 02:18:44 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:57.782 02:18:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:57.782 ************************************ 00:07:57.782 START TEST nvme_sgl 00:07:57.782 ************************************ 00:07:57.782 02:18:44 nvme.nvme_sgl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:57.782 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:07:57.782 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:07:57.782 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:07:57.782 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:07:57.782 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:07:57.782 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:07:57.782 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:07:57.782 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:07:57.782 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:07:58.040 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:07:58.040 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:07:58.040 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:07:58.040 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:07:58.040 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:07:58.040 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:07:58.040 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:07:58.040 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:07:58.040 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:07:58.040 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:07:58.040 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:07:58.040 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:07:58.040 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:07:58.040 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:07:58.040 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:07:58.040 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:07:58.040 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:07:58.040 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:07:58.040 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:07:58.040 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:07:58.040 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:07:58.040 0000:00:12.0: build_io_request_6 Invalid IO length parameter 
00:07:58.040 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:07:58.040 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:07:58.040 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:07:58.040 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:07:58.040 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:07:58.040 NVMe Readv/Writev Request test 00:07:58.040 Attached to 0000:00:10.0 00:07:58.040 Attached to 0000:00:11.0 00:07:58.040 Attached to 0000:00:13.0 00:07:58.040 Attached to 0000:00:12.0 00:07:58.040 0000:00:10.0: build_io_request_2 test passed 00:07:58.040 0000:00:10.0: build_io_request_4 test passed 00:07:58.040 0000:00:10.0: build_io_request_5 test passed 00:07:58.040 0000:00:10.0: build_io_request_6 test passed 00:07:58.040 0000:00:10.0: build_io_request_7 test passed 00:07:58.040 0000:00:10.0: build_io_request_10 test passed 00:07:58.040 0000:00:11.0: build_io_request_2 test passed 00:07:58.040 0000:00:11.0: build_io_request_4 test passed 00:07:58.040 0000:00:11.0: build_io_request_5 test passed 00:07:58.040 0000:00:11.0: build_io_request_6 test passed 00:07:58.040 0000:00:11.0: build_io_request_7 test passed 00:07:58.040 0000:00:11.0: build_io_request_10 test passed 00:07:58.040 Cleaning up... 00:07:58.040 00:07:58.040 real 0m0.301s 00:07:58.040 user 0m0.158s 00:07:58.040 sys 0m0.100s 00:07:58.040 02:18:44 nvme.nvme_sgl -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:58.040 02:18:44 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:07:58.040 ************************************ 00:07:58.040 END TEST nvme_sgl 00:07:58.040 ************************************ 00:07:58.040 02:18:44 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:58.040 02:18:44 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:58.040 02:18:44 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:58.040 02:18:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:58.040 ************************************ 00:07:58.040 START TEST nvme_e2edp 00:07:58.040 ************************************ 00:07:58.040 02:18:44 nvme.nvme_e2edp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:58.298 NVMe Write/Read with End-to-End data protection test 00:07:58.298 Attached to 0000:00:10.0 00:07:58.298 Attached to 0000:00:11.0 00:07:58.298 Attached to 0000:00:13.0 00:07:58.298 Attached to 0000:00:12.0 00:07:58.298 Cleaning up... 
00:07:58.298 ************************************ 00:07:58.298 END TEST nvme_e2edp 00:07:58.298 ************************************ 00:07:58.298 00:07:58.298 real 0m0.216s 00:07:58.298 user 0m0.078s 00:07:58.298 sys 0m0.096s 00:07:58.298 02:18:45 nvme.nvme_e2edp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:58.298 02:18:45 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:07:58.298 02:18:45 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:58.298 02:18:45 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:58.298 02:18:45 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:58.298 02:18:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:58.298 ************************************ 00:07:58.298 START TEST nvme_reserve 00:07:58.298 ************************************ 00:07:58.298 02:18:45 nvme.nvme_reserve -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:58.556 ===================================================== 00:07:58.556 NVMe Controller at PCI bus 0, device 16, function 0 00:07:58.556 ===================================================== 00:07:58.556 Reservations: Not Supported 00:07:58.556 ===================================================== 00:07:58.556 NVMe Controller at PCI bus 0, device 17, function 0 00:07:58.556 ===================================================== 00:07:58.556 Reservations: Not Supported 00:07:58.556 ===================================================== 00:07:58.556 NVMe Controller at PCI bus 0, device 19, function 0 00:07:58.556 ===================================================== 00:07:58.556 Reservations: Not Supported 00:07:58.556 ===================================================== 00:07:58.556 NVMe Controller at PCI bus 0, device 18, function 0 00:07:58.556 ===================================================== 00:07:58.556 Reservations: Not Supported 00:07:58.556 Reservation test passed 00:07:58.556 ************************************ 00:07:58.556 END TEST nvme_reserve 00:07:58.556 ************************************ 00:07:58.556 00:07:58.556 real 0m0.218s 00:07:58.556 user 0m0.068s 00:07:58.556 sys 0m0.101s 00:07:58.556 02:18:45 nvme.nvme_reserve -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:58.556 02:18:45 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:07:58.556 02:18:45 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:58.556 02:18:45 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:58.556 02:18:45 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:58.556 02:18:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:58.556 ************************************ 00:07:58.556 START TEST nvme_err_injection 00:07:58.556 ************************************ 00:07:58.556 02:18:45 nvme.nvme_err_injection -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:58.813 NVMe Error Injection test 00:07:58.813 Attached to 0000:00:10.0 00:07:58.813 Attached to 0000:00:11.0 00:07:58.813 Attached to 0000:00:13.0 00:07:58.813 Attached to 0000:00:12.0 00:07:58.813 0000:00:10.0: get features failed as expected 00:07:58.813 0000:00:11.0: get features failed as expected 00:07:58.813 0000:00:13.0: get features failed as expected 00:07:58.813 0000:00:12.0: get features failed as expected 00:07:58.813 
0000:00:10.0: get features successfully as expected 00:07:58.813 0000:00:11.0: get features successfully as expected 00:07:58.813 0000:00:13.0: get features successfully as expected 00:07:58.813 0000:00:12.0: get features successfully as expected 00:07:58.813 0000:00:10.0: read failed as expected 00:07:58.813 0000:00:11.0: read failed as expected 00:07:58.813 0000:00:13.0: read failed as expected 00:07:58.813 0000:00:12.0: read failed as expected 00:07:58.813 0000:00:10.0: read successfully as expected 00:07:58.813 0000:00:11.0: read successfully as expected 00:07:58.813 0000:00:13.0: read successfully as expected 00:07:58.813 0000:00:12.0: read successfully as expected 00:07:58.813 Cleaning up... 00:07:58.813 ************************************ 00:07:58.813 END TEST nvme_err_injection 00:07:58.813 ************************************ 00:07:58.813 00:07:58.814 real 0m0.218s 00:07:58.814 user 0m0.089s 00:07:58.814 sys 0m0.090s 00:07:58.814 02:18:45 nvme.nvme_err_injection -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:58.814 02:18:45 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:58.814 02:18:45 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:58.814 02:18:45 nvme -- common/autotest_common.sh@1103 -- # '[' 9 -le 1 ']' 00:07:58.814 02:18:45 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:58.814 02:18:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:58.814 ************************************ 00:07:58.814 START TEST nvme_overhead 00:07:58.814 ************************************ 00:07:58.814 02:18:45 nvme.nvme_overhead -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:00.186 Initializing NVMe Controllers 00:08:00.186 Attached to 0000:00:10.0 00:08:00.186 Attached to 0000:00:11.0 00:08:00.186 Attached to 0000:00:13.0 00:08:00.186 Attached to 0000:00:12.0 00:08:00.186 Initialization complete. Launching workers. 
00:08:00.186 submit (in ns) avg, min, max = 12916.8, 11396.2, 304937.7 00:08:00.186 complete (in ns) avg, min, max = 8542.0, 7742.3, 241613.8 00:08:00.186 00:08:00.186 Submit histogram 00:08:00.186 ================ 00:08:00.186 Range in us Cumulative Count 00:08:00.186 11.372 - 11.422: 0.0084% ( 1) 00:08:00.186 11.766 - 11.815: 0.1347% ( 15) 00:08:00.186 11.815 - 11.865: 0.4546% ( 38) 00:08:00.186 11.865 - 11.914: 1.3470% ( 106) 00:08:00.186 11.914 - 11.963: 2.9635% ( 192) 00:08:00.186 11.963 - 12.012: 5.7333% ( 329) 00:08:00.186 12.012 - 12.062: 10.1701% ( 527) 00:08:00.186 12.062 - 12.111: 16.4843% ( 750) 00:08:00.186 12.111 - 12.160: 24.1792% ( 914) 00:08:00.186 12.160 - 12.209: 33.2211% ( 1074) 00:08:00.186 12.209 - 12.258: 42.7766% ( 1135) 00:08:00.186 12.258 - 12.308: 51.6164% ( 1050) 00:08:00.186 12.308 - 12.357: 60.0690% ( 1004) 00:08:00.186 12.357 - 12.406: 67.5114% ( 884) 00:08:00.186 12.406 - 12.455: 73.1015% ( 664) 00:08:00.186 12.455 - 12.505: 77.2773% ( 496) 00:08:00.186 12.505 - 12.554: 80.2576% ( 354) 00:08:00.186 12.554 - 12.603: 82.9854% ( 324) 00:08:00.186 12.603 - 12.702: 87.1948% ( 500) 00:08:00.186 12.702 - 12.800: 90.0993% ( 345) 00:08:00.187 12.800 - 12.898: 92.0694% ( 234) 00:08:00.187 12.898 - 12.997: 93.2059% ( 135) 00:08:00.187 12.997 - 13.095: 93.8963% ( 82) 00:08:00.187 13.095 - 13.194: 94.1825% ( 34) 00:08:00.187 13.194 - 13.292: 94.3930% ( 25) 00:08:00.187 13.292 - 13.391: 94.4688% ( 9) 00:08:00.187 13.391 - 13.489: 94.5024% ( 4) 00:08:00.187 13.489 - 13.588: 94.5277% ( 3) 00:08:00.187 13.588 - 13.686: 94.6035% ( 9) 00:08:00.187 13.686 - 13.785: 94.6287% ( 3) 00:08:00.187 13.785 - 13.883: 94.6792% ( 6) 00:08:00.187 13.883 - 13.982: 94.6961% ( 2) 00:08:00.187 13.982 - 14.080: 94.7382% ( 5) 00:08:00.187 14.080 - 14.178: 94.7887% ( 6) 00:08:00.187 14.178 - 14.277: 94.8645% ( 9) 00:08:00.187 14.277 - 14.375: 94.9739% ( 13) 00:08:00.187 14.375 - 14.474: 95.1170% ( 17) 00:08:00.187 14.474 - 14.572: 95.2854% ( 20) 00:08:00.187 14.572 - 14.671: 95.4285% ( 17) 00:08:00.187 14.671 - 14.769: 95.6053% ( 21) 00:08:00.187 14.769 - 14.868: 95.7484% ( 17) 00:08:00.187 14.868 - 14.966: 95.9252% ( 21) 00:08:00.187 14.966 - 15.065: 96.0263% ( 12) 00:08:00.187 15.065 - 15.163: 96.0936% ( 8) 00:08:00.187 15.163 - 15.262: 96.1273% ( 4) 00:08:00.187 15.262 - 15.360: 96.1778% ( 6) 00:08:00.187 15.360 - 15.458: 96.1946% ( 2) 00:08:00.187 15.458 - 15.557: 96.2283% ( 4) 00:08:00.187 15.557 - 15.655: 96.2452% ( 2) 00:08:00.187 15.754 - 15.852: 96.2536% ( 1) 00:08:00.187 15.852 - 15.951: 96.2704% ( 2) 00:08:00.187 15.951 - 16.049: 96.3209% ( 6) 00:08:00.187 16.049 - 16.148: 96.3378% ( 2) 00:08:00.187 16.148 - 16.246: 96.3546% ( 2) 00:08:00.187 16.246 - 16.345: 96.3714% ( 2) 00:08:00.187 16.345 - 16.443: 96.3967% ( 3) 00:08:00.187 16.443 - 16.542: 96.4304% ( 4) 00:08:00.187 16.542 - 16.640: 96.4641% ( 4) 00:08:00.187 16.640 - 16.738: 96.4725% ( 1) 00:08:00.187 16.738 - 16.837: 96.5146% ( 5) 00:08:00.187 16.837 - 16.935: 96.5314% ( 2) 00:08:00.187 16.935 - 17.034: 96.5651% ( 4) 00:08:00.187 17.034 - 17.132: 96.5819% ( 2) 00:08:00.187 17.132 - 17.231: 96.6240% ( 5) 00:08:00.187 17.231 - 17.329: 96.6408% ( 2) 00:08:00.187 17.329 - 17.428: 96.6493% ( 1) 00:08:00.187 17.428 - 17.526: 96.6577% ( 1) 00:08:00.187 17.526 - 17.625: 96.7082% ( 6) 00:08:00.187 17.822 - 17.920: 96.7503% ( 5) 00:08:00.187 17.920 - 18.018: 96.7756% ( 3) 00:08:00.187 18.018 - 18.117: 96.8345% ( 7) 00:08:00.187 18.117 - 18.215: 96.8934% ( 7) 00:08:00.187 18.215 - 18.314: 96.9439% ( 6) 00:08:00.187 18.314 - 18.412: 
96.9944% ( 6) 00:08:00.187 18.412 - 18.511: 97.1207% ( 15) 00:08:00.187 18.511 - 18.609: 97.2470% ( 15) 00:08:00.187 18.609 - 18.708: 97.3228% ( 9) 00:08:00.187 18.708 - 18.806: 97.4070% ( 10) 00:08:00.187 18.806 - 18.905: 97.4912% ( 10) 00:08:00.187 18.905 - 19.003: 97.5922% ( 12) 00:08:00.187 19.003 - 19.102: 97.6680% ( 9) 00:08:00.187 19.102 - 19.200: 97.6932% ( 3) 00:08:00.187 19.200 - 19.298: 97.7690% ( 9) 00:08:00.187 19.298 - 19.397: 97.7942% ( 3) 00:08:00.187 19.397 - 19.495: 97.8448% ( 6) 00:08:00.187 19.495 - 19.594: 97.8616% ( 2) 00:08:00.187 19.594 - 19.692: 97.8784% ( 2) 00:08:00.187 19.692 - 19.791: 97.9037% ( 3) 00:08:00.187 19.791 - 19.889: 97.9205% ( 2) 00:08:00.187 19.889 - 19.988: 97.9542% ( 4) 00:08:00.187 19.988 - 20.086: 97.9795% ( 3) 00:08:00.187 20.086 - 20.185: 98.0047% ( 3) 00:08:00.187 20.382 - 20.480: 98.0384% ( 4) 00:08:00.187 20.480 - 20.578: 98.0552% ( 2) 00:08:00.187 20.578 - 20.677: 98.0721% ( 2) 00:08:00.187 20.677 - 20.775: 98.0889% ( 2) 00:08:00.187 20.775 - 20.874: 98.1057% ( 2) 00:08:00.187 20.972 - 21.071: 98.1142% ( 1) 00:08:00.187 21.071 - 21.169: 98.1226% ( 1) 00:08:00.187 21.465 - 21.563: 98.1310% ( 1) 00:08:00.187 21.662 - 21.760: 98.1394% ( 1) 00:08:00.187 22.154 - 22.252: 98.1478% ( 1) 00:08:00.187 22.449 - 22.548: 98.1563% ( 1) 00:08:00.187 22.548 - 22.646: 98.1731% ( 2) 00:08:00.187 22.745 - 22.843: 98.1815% ( 1) 00:08:00.187 23.631 - 23.729: 98.2068% ( 3) 00:08:00.187 23.828 - 23.926: 98.2152% ( 1) 00:08:00.187 27.175 - 27.372: 98.2320% ( 2) 00:08:00.187 27.569 - 27.766: 98.2404% ( 1) 00:08:00.187 29.735 - 29.932: 98.2489% ( 1) 00:08:00.187 30.720 - 30.917: 98.2741% ( 3) 00:08:00.187 30.917 - 31.114: 98.4678% ( 23) 00:08:00.187 31.114 - 31.311: 98.9813% ( 61) 00:08:00.187 31.311 - 31.508: 99.4191% ( 52) 00:08:00.187 31.508 - 31.705: 99.6380% ( 26) 00:08:00.187 31.705 - 31.902: 99.7306% ( 11) 00:08:00.187 31.902 - 32.098: 99.7474% ( 2) 00:08:00.187 32.098 - 32.295: 99.7643% ( 2) 00:08:00.187 32.689 - 32.886: 99.7727% ( 1) 00:08:00.187 33.871 - 34.068: 99.7811% ( 1) 00:08:00.187 35.643 - 35.840: 99.7895% ( 1) 00:08:00.187 36.628 - 36.825: 99.7979% ( 1) 00:08:00.187 37.415 - 37.612: 99.8064% ( 1) 00:08:00.187 38.991 - 39.188: 99.8148% ( 1) 00:08:00.187 41.157 - 41.354: 99.8232% ( 1) 00:08:00.187 42.732 - 42.929: 99.8400% ( 2) 00:08:00.187 43.323 - 43.520: 99.8485% ( 1) 00:08:00.187 45.883 - 46.080: 99.8569% ( 1) 00:08:00.187 46.474 - 46.671: 99.8653% ( 1) 00:08:00.187 47.262 - 47.458: 99.8737% ( 1) 00:08:00.187 48.049 - 48.246: 99.8821% ( 1) 00:08:00.187 49.034 - 49.231: 99.8906% ( 1) 00:08:00.187 49.428 - 49.625: 99.8990% ( 1) 00:08:00.187 51.594 - 51.988: 99.9074% ( 1) 00:08:00.187 55.926 - 56.320: 99.9158% ( 1) 00:08:00.187 56.714 - 57.108: 99.9242% ( 1) 00:08:00.187 59.471 - 59.865: 99.9326% ( 1) 00:08:00.187 60.258 - 60.652: 99.9411% ( 1) 00:08:00.187 65.378 - 65.772: 99.9495% ( 1) 00:08:00.187 67.348 - 67.742: 99.9579% ( 1) 00:08:00.187 80.738 - 81.132: 99.9663% ( 1) 00:08:00.187 97.674 - 98.068: 99.9747% ( 1) 00:08:00.187 174.080 - 174.868: 99.9832% ( 1) 00:08:00.187 261.514 - 263.089: 99.9916% ( 1) 00:08:00.187 304.049 - 305.625: 100.0000% ( 1) 00:08:00.187 00:08:00.187 Complete histogram 00:08:00.187 ================== 00:08:00.187 Range in us Cumulative Count 00:08:00.187 7.729 - 7.778: 0.1010% ( 12) 00:08:00.187 7.778 - 7.828: 1.0860% ( 117) 00:08:00.187 7.828 - 7.877: 5.5481% ( 530) 00:08:00.187 7.877 - 7.926: 15.7939% ( 1217) 00:08:00.187 7.926 - 7.975: 28.9106% ( 1558) 00:08:00.187 7.975 - 8.025: 40.7813% ( 1410) 00:08:00.187 
8.025 - 8.074: 51.8690% ( 1317) 00:08:00.187 8.074 - 8.123: 62.6200% ( 1277) 00:08:00.187 8.123 - 8.172: 72.8490% ( 1215) 00:08:00.187 8.172 - 8.222: 80.4344% ( 901) 00:08:00.187 8.222 - 8.271: 85.8646% ( 645) 00:08:00.187 8.271 - 8.320: 89.4679% ( 428) 00:08:00.187 8.320 - 8.369: 91.4548% ( 236) 00:08:00.187 8.369 - 8.418: 92.9449% ( 177) 00:08:00.187 8.418 - 8.468: 93.7868% ( 100) 00:08:00.187 8.468 - 8.517: 94.2751% ( 58) 00:08:00.187 8.517 - 8.566: 94.6540% ( 45) 00:08:00.187 8.566 - 8.615: 94.8476% ( 23) 00:08:00.187 8.615 - 8.665: 95.0749% ( 27) 00:08:00.187 8.665 - 8.714: 95.2096% ( 16) 00:08:00.187 8.714 - 8.763: 95.3528% ( 17) 00:08:00.187 8.763 - 8.812: 95.4454% ( 11) 00:08:00.187 8.812 - 8.862: 95.5380% ( 11) 00:08:00.187 8.862 - 8.911: 95.5632% ( 3) 00:08:00.187 8.911 - 8.960: 95.6558% ( 11) 00:08:00.187 9.009 - 9.058: 95.6895% ( 4) 00:08:00.187 9.058 - 9.108: 95.7063% ( 2) 00:08:00.187 9.108 - 9.157: 95.7400% ( 4) 00:08:00.187 9.157 - 9.206: 95.7653% ( 3) 00:08:00.187 9.206 - 9.255: 95.7905% ( 3) 00:08:00.187 9.255 - 9.305: 95.8242% ( 4) 00:08:00.187 9.305 - 9.354: 95.8326% ( 1) 00:08:00.187 9.354 - 9.403: 95.8411% ( 1) 00:08:00.187 9.403 - 9.452: 95.8579% ( 2) 00:08:00.187 9.502 - 9.551: 95.8663% ( 1) 00:08:00.187 9.600 - 9.649: 95.8831% ( 2) 00:08:00.187 9.649 - 9.698: 95.8916% ( 1) 00:08:00.187 9.748 - 9.797: 95.9000% ( 1) 00:08:00.187 9.895 - 9.945: 95.9084% ( 1) 00:08:00.187 10.043 - 10.092: 95.9168% ( 1) 00:08:00.187 10.092 - 10.142: 95.9252% ( 1) 00:08:00.187 10.191 - 10.240: 95.9421% ( 2) 00:08:00.187 10.240 - 10.289: 95.9505% ( 1) 00:08:00.187 10.289 - 10.338: 95.9589% ( 1) 00:08:00.187 10.437 - 10.486: 95.9673% ( 1) 00:08:00.187 10.486 - 10.535: 95.9758% ( 1) 00:08:00.187 10.535 - 10.585: 95.9926% ( 2) 00:08:00.187 10.585 - 10.634: 96.0010% ( 1) 00:08:00.187 10.683 - 10.732: 96.0263% ( 3) 00:08:00.187 10.732 - 10.782: 96.0347% ( 1) 00:08:00.187 10.831 - 10.880: 96.0431% ( 1) 00:08:00.187 10.929 - 10.978: 96.0599% ( 2) 00:08:00.187 10.978 - 11.028: 96.0768% ( 2) 00:08:00.187 11.028 - 11.077: 96.1020% ( 3) 00:08:00.187 11.077 - 11.126: 96.1357% ( 4) 00:08:00.187 11.126 - 11.175: 96.1526% ( 2) 00:08:00.187 11.175 - 11.225: 96.1694% ( 2) 00:08:00.187 11.225 - 11.274: 96.1862% ( 2) 00:08:00.187 11.274 - 11.323: 96.2115% ( 3) 00:08:00.187 11.323 - 11.372: 96.2199% ( 1) 00:08:00.187 11.372 - 11.422: 96.2367% ( 2) 00:08:00.187 11.422 - 11.471: 96.2536% ( 2) 00:08:00.188 11.471 - 11.520: 96.2620% ( 1) 00:08:00.188 11.520 - 11.569: 96.2704% ( 1) 00:08:00.188 11.569 - 11.618: 96.2873% ( 2) 00:08:00.188 11.618 - 11.668: 96.3293% ( 5) 00:08:00.188 11.668 - 11.717: 96.3462% ( 2) 00:08:00.188 11.717 - 11.766: 96.3714% ( 3) 00:08:00.188 11.815 - 11.865: 96.3883% ( 2) 00:08:00.188 11.865 - 11.914: 96.4388% ( 6) 00:08:00.188 11.914 - 11.963: 96.4472% ( 1) 00:08:00.188 11.963 - 12.012: 96.4556% ( 1) 00:08:00.188 12.012 - 12.062: 96.4641% ( 1) 00:08:00.188 12.062 - 12.111: 96.4809% ( 2) 00:08:00.188 12.111 - 12.160: 96.4977% ( 2) 00:08:00.188 12.209 - 12.258: 96.5146% ( 2) 00:08:00.188 12.357 - 12.406: 96.5230% ( 1) 00:08:00.188 12.505 - 12.554: 96.5314% ( 1) 00:08:00.188 12.702 - 12.800: 96.5398% ( 1) 00:08:00.188 12.800 - 12.898: 96.5567% ( 2) 00:08:00.188 12.898 - 12.997: 96.5819% ( 3) 00:08:00.188 12.997 - 13.095: 96.5903% ( 1) 00:08:00.188 13.095 - 13.194: 96.5988% ( 1) 00:08:00.188 13.489 - 13.588: 96.6240% ( 3) 00:08:00.188 13.588 - 13.686: 96.6408% ( 2) 00:08:00.188 13.686 - 13.785: 96.6829% ( 5) 00:08:00.188 13.785 - 13.883: 96.6914% ( 1) 00:08:00.188 13.883 - 13.982: 96.7166% 
( 3) 00:08:00.188 13.982 - 14.080: 96.7503% ( 4) 00:08:00.188 14.080 - 14.178: 96.8345% ( 10) 00:08:00.188 14.178 - 14.277: 96.9103% ( 9) 00:08:00.188 14.277 - 14.375: 97.0113% ( 12) 00:08:00.188 14.375 - 14.474: 97.1376% ( 15) 00:08:00.188 14.474 - 14.572: 97.1712% ( 4) 00:08:00.188 14.572 - 14.671: 97.2386% ( 8) 00:08:00.188 14.671 - 14.769: 97.3649% ( 15) 00:08:00.188 14.769 - 14.868: 97.4743% ( 13) 00:08:00.188 14.868 - 14.966: 97.5585% ( 10) 00:08:00.188 14.966 - 15.065: 97.6006% ( 5) 00:08:00.188 15.065 - 15.163: 97.6848% ( 10) 00:08:00.188 15.163 - 15.262: 97.7269% ( 5) 00:08:00.188 15.262 - 15.360: 97.7774% ( 6) 00:08:00.188 15.360 - 15.458: 97.8111% ( 4) 00:08:00.188 15.458 - 15.557: 97.8532% ( 5) 00:08:00.188 15.557 - 15.655: 97.9205% ( 8) 00:08:00.188 15.655 - 15.754: 97.9458% ( 3) 00:08:00.188 15.754 - 15.852: 97.9879% ( 5) 00:08:00.188 15.852 - 15.951: 98.0131% ( 3) 00:08:00.188 15.951 - 16.049: 98.0216% ( 1) 00:08:00.188 16.049 - 16.148: 98.0384% ( 2) 00:08:00.188 16.345 - 16.443: 98.0468% ( 1) 00:08:00.188 16.443 - 16.542: 98.0552% ( 1) 00:08:00.188 16.640 - 16.738: 98.0805% ( 3) 00:08:00.188 16.935 - 17.034: 98.0889% ( 1) 00:08:00.188 17.034 - 17.132: 98.0973% ( 1) 00:08:00.188 17.231 - 17.329: 98.1057% ( 1) 00:08:00.188 17.428 - 17.526: 98.1142% ( 1) 00:08:00.188 18.018 - 18.117: 98.1226% ( 1) 00:08:00.188 18.511 - 18.609: 98.1394% ( 2) 00:08:00.188 18.708 - 18.806: 98.1563% ( 2) 00:08:00.188 18.806 - 18.905: 98.1647% ( 1) 00:08:00.188 19.298 - 19.397: 98.1815% ( 2) 00:08:00.188 19.594 - 19.692: 98.1899% ( 1) 00:08:00.188 20.086 - 20.185: 98.1983% ( 1) 00:08:00.188 20.382 - 20.480: 98.2068% ( 1) 00:08:00.188 20.480 - 20.578: 98.2152% ( 1) 00:08:00.188 20.677 - 20.775: 98.2236% ( 1) 00:08:00.188 20.775 - 20.874: 98.2320% ( 1) 00:08:00.188 21.563 - 21.662: 98.2404% ( 1) 00:08:00.188 21.760 - 21.858: 98.2489% ( 1) 00:08:00.188 22.154 - 22.252: 98.2573% ( 1) 00:08:00.188 22.252 - 22.351: 98.2825% ( 3) 00:08:00.188 22.351 - 22.449: 98.3667% ( 10) 00:08:00.188 22.449 - 22.548: 98.6951% ( 39) 00:08:00.188 22.548 - 22.646: 98.9055% ( 25) 00:08:00.188 22.646 - 22.745: 99.2255% ( 38) 00:08:00.188 22.745 - 22.843: 99.4612% ( 28) 00:08:00.188 22.843 - 22.942: 99.5875% ( 15) 00:08:00.188 22.942 - 23.040: 99.7053% ( 14) 00:08:00.188 23.040 - 23.138: 99.7138% ( 1) 00:08:00.188 23.138 - 23.237: 99.7390% ( 3) 00:08:00.188 23.237 - 23.335: 99.7474% ( 1) 00:08:00.188 23.335 - 23.434: 99.7643% ( 2) 00:08:00.188 23.434 - 23.532: 99.7727% ( 1) 00:08:00.188 23.532 - 23.631: 99.7811% ( 1) 00:08:00.188 23.631 - 23.729: 99.7895% ( 1) 00:08:00.188 23.729 - 23.828: 99.7979% ( 1) 00:08:00.188 24.025 - 24.123: 99.8148% ( 2) 00:08:00.188 24.123 - 24.222: 99.8232% ( 1) 00:08:00.188 25.009 - 25.108: 99.8316% ( 1) 00:08:00.188 25.206 - 25.403: 99.8400% ( 1) 00:08:00.188 27.372 - 27.569: 99.8485% ( 1) 00:08:00.188 33.871 - 34.068: 99.8653% ( 2) 00:08:00.188 34.068 - 34.265: 99.8737% ( 1) 00:08:00.188 38.006 - 38.203: 99.8821% ( 1) 00:08:00.188 39.385 - 39.582: 99.8906% ( 1) 00:08:00.188 41.157 - 41.354: 99.8990% ( 1) 00:08:00.188 41.551 - 41.748: 99.9074% ( 1) 00:08:00.188 47.458 - 47.655: 99.9158% ( 1) 00:08:00.188 48.049 - 48.246: 99.9242% ( 1) 00:08:00.188 53.957 - 54.351: 99.9326% ( 1) 00:08:00.188 61.046 - 61.440: 99.9411% ( 1) 00:08:00.188 62.622 - 63.015: 99.9579% ( 2) 00:08:00.188 63.409 - 63.803: 99.9663% ( 1) 00:08:00.188 66.166 - 66.560: 99.9747% ( 1) 00:08:00.188 88.222 - 88.615: 99.9832% ( 1) 00:08:00.188 102.400 - 103.188: 99.9916% ( 1) 00:08:00.188 241.034 - 242.609: 100.0000% ( 1) 
00:08:00.188 00:08:00.188 ************************************ 00:08:00.188 END TEST nvme_overhead 00:08:00.188 ************************************ 00:08:00.188 00:08:00.188 real 0m1.227s 00:08:00.188 user 0m1.073s 00:08:00.188 sys 0m0.104s 00:08:00.188 02:18:46 nvme.nvme_overhead -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:00.188 02:18:46 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:00.188 02:18:47 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:00.188 02:18:47 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:08:00.188 02:18:47 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:00.188 02:18:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:00.188 ************************************ 00:08:00.188 START TEST nvme_arbitration 00:08:00.188 ************************************ 00:08:00.188 02:18:47 nvme.nvme_arbitration -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:03.514 Initializing NVMe Controllers 00:08:03.514 Attached to 0000:00:10.0 00:08:03.514 Attached to 0000:00:11.0 00:08:03.514 Attached to 0000:00:13.0 00:08:03.514 Attached to 0000:00:12.0 00:08:03.514 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:08:03.514 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:08:03.514 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:08:03.514 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:03.514 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:03.514 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:03.514 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:03.514 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:03.514 Initialization complete. Launching workers. 
00:08:03.514 Starting thread on core 1 with urgent priority queue 00:08:03.514 Starting thread on core 2 with urgent priority queue 00:08:03.514 Starting thread on core 3 with urgent priority queue 00:08:03.514 Starting thread on core 0 with urgent priority queue 00:08:03.514 QEMU NVMe Ctrl (12340 ) core 0: 917.33 IO/s 109.01 secs/100000 ios 00:08:03.514 QEMU NVMe Ctrl (12342 ) core 0: 917.33 IO/s 109.01 secs/100000 ios 00:08:03.514 QEMU NVMe Ctrl (12341 ) core 1: 874.67 IO/s 114.33 secs/100000 ios 00:08:03.514 QEMU NVMe Ctrl (12342 ) core 1: 874.67 IO/s 114.33 secs/100000 ios 00:08:03.514 QEMU NVMe Ctrl (12343 ) core 2: 896.00 IO/s 111.61 secs/100000 ios 00:08:03.514 QEMU NVMe Ctrl (12342 ) core 3: 768.00 IO/s 130.21 secs/100000 ios 00:08:03.514 ======================================================== 00:08:03.514 00:08:03.514 00:08:03.514 real 0m3.290s 00:08:03.514 user 0m9.229s 00:08:03.514 sys 0m0.101s 00:08:03.514 ************************************ 00:08:03.514 END TEST nvme_arbitration 00:08:03.514 ************************************ 00:08:03.514 02:18:50 nvme.nvme_arbitration -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:03.514 02:18:50 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:03.514 02:18:50 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:03.514 02:18:50 nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:03.515 02:18:50 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:03.515 02:18:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:03.515 ************************************ 00:08:03.515 START TEST nvme_single_aen 00:08:03.515 ************************************ 00:08:03.515 02:18:50 nvme.nvme_single_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:03.515 Asynchronous Event Request test 00:08:03.515 Attached to 0000:00:10.0 00:08:03.515 Attached to 0000:00:11.0 00:08:03.515 Attached to 0000:00:13.0 00:08:03.515 Attached to 0000:00:12.0 00:08:03.515 Reset controller to setup AER completions for this process 00:08:03.515 Registering asynchronous event callbacks... 
00:08:03.515 Getting orig temperature thresholds of all controllers 00:08:03.515 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:03.515 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:03.515 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:03.515 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:03.515 Setting all controllers temperature threshold low to trigger AER 00:08:03.515 Waiting for all controllers temperature threshold to be set lower 00:08:03.515 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:03.515 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:03.515 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:03.515 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:03.515 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:03.515 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:03.515 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:03.515 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:03.515 Waiting for all controllers to trigger AER and reset threshold 00:08:03.515 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:03.515 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:03.515 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:03.515 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:03.515 Cleaning up... 00:08:03.515 00:08:03.515 real 0m0.194s 00:08:03.515 user 0m0.074s 00:08:03.515 sys 0m0.087s 00:08:03.515 02:18:50 nvme.nvme_single_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:03.515 ************************************ 00:08:03.515 END TEST nvme_single_aen 00:08:03.515 ************************************ 00:08:03.515 02:18:50 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:03.515 02:18:50 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:03.515 02:18:50 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:03.515 02:18:50 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:03.515 02:18:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:03.515 ************************************ 00:08:03.515 START TEST nvme_doorbell_aers 00:08:03.515 ************************************ 00:08:03.515 02:18:50 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1127 -- # nvme_doorbell_aers 00:08:03.515 02:18:50 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:03.515 02:18:50 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:03.515 02:18:50 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:03.515 02:18:50 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:03.515 02:18:50 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:03.515 02:18:50 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:08:03.515 02:18:50 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:03.515 02:18:50 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:03.515 02:18:50 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 
00:08:03.773 02:18:50 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:03.773 02:18:50 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:03.774 02:18:50 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:03.774 02:18:50 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:03.774 [2024-11-04 02:18:50.866810] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63121) is not found. Dropping the request. 00:08:13.762 Executing: test_write_invalid_db 00:08:13.762 Waiting for AER completion... 00:08:13.762 Failure: test_write_invalid_db 00:08:13.762 00:08:13.763 Executing: test_invalid_db_write_overflow_sq 00:08:13.763 Waiting for AER completion... 00:08:13.763 Failure: test_invalid_db_write_overflow_sq 00:08:13.763 00:08:13.763 Executing: test_invalid_db_write_overflow_cq 00:08:13.763 Waiting for AER completion... 00:08:13.763 Failure: test_invalid_db_write_overflow_cq 00:08:13.763 00:08:13.763 02:19:00 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:13.763 02:19:00 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:14.021 [2024-11-04 02:19:00.919412] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63121) is not found. Dropping the request. 00:08:24.019 Executing: test_write_invalid_db 00:08:24.019 Waiting for AER completion... 00:08:24.019 Failure: test_write_invalid_db 00:08:24.019 00:08:24.019 Executing: test_invalid_db_write_overflow_sq 00:08:24.019 Waiting for AER completion... 00:08:24.019 Failure: test_invalid_db_write_overflow_sq 00:08:24.019 00:08:24.019 Executing: test_invalid_db_write_overflow_cq 00:08:24.019 Waiting for AER completion... 00:08:24.019 Failure: test_invalid_db_write_overflow_cq 00:08:24.019 00:08:24.019 02:19:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:24.019 02:19:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:24.019 [2024-11-04 02:19:10.933884] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63121) is not found. Dropping the request. 00:08:33.989 Executing: test_write_invalid_db 00:08:33.989 Waiting for AER completion... 00:08:33.989 Failure: test_write_invalid_db 00:08:33.989 00:08:33.989 Executing: test_invalid_db_write_overflow_sq 00:08:33.989 Waiting for AER completion... 00:08:33.989 Failure: test_invalid_db_write_overflow_sq 00:08:33.989 00:08:33.990 Executing: test_invalid_db_write_overflow_cq 00:08:33.990 Waiting for AER completion... 
00:08:33.990 Failure: test_invalid_db_write_overflow_cq 00:08:33.990 00:08:33.990 02:19:20 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:33.990 02:19:20 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:33.990 [2024-11-04 02:19:20.981903] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63121) is not found. Dropping the request. 00:08:43.980 Executing: test_write_invalid_db 00:08:43.980 Waiting for AER completion... 00:08:43.980 Failure: test_write_invalid_db 00:08:43.980 00:08:43.980 Executing: test_invalid_db_write_overflow_sq 00:08:43.980 Waiting for AER completion... 00:08:43.980 Failure: test_invalid_db_write_overflow_sq 00:08:43.980 00:08:43.980 Executing: test_invalid_db_write_overflow_cq 00:08:43.980 Waiting for AER completion... 00:08:43.981 Failure: test_invalid_db_write_overflow_cq 00:08:43.981 00:08:43.981 00:08:43.981 real 0m40.197s 00:08:43.981 user 0m34.136s 00:08:43.981 sys 0m5.689s 00:08:43.981 02:19:30 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:43.981 02:19:30 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:43.981 ************************************ 00:08:43.981 END TEST nvme_doorbell_aers 00:08:43.981 ************************************ 00:08:43.981 02:19:30 nvme -- nvme/nvme.sh@97 -- # uname 00:08:43.981 02:19:30 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:43.981 02:19:30 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:43.981 02:19:30 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:08:43.981 02:19:30 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:43.981 02:19:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:43.981 ************************************ 00:08:43.981 START TEST nvme_multi_aen 00:08:43.981 ************************************ 00:08:43.981 02:19:30 nvme.nvme_multi_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:43.981 [2024-11-04 02:19:31.012082] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63121) is not found. Dropping the request. 00:08:43.981 [2024-11-04 02:19:31.012246] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63121) is not found. Dropping the request. 00:08:43.981 [2024-11-04 02:19:31.012259] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63121) is not found. Dropping the request. 00:08:43.981 [2024-11-04 02:19:31.013531] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63121) is not found. Dropping the request. 00:08:43.981 [2024-11-04 02:19:31.013555] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63121) is not found. Dropping the request. 00:08:43.981 [2024-11-04 02:19:31.013563] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63121) is not found. Dropping the request. 00:08:43.981 [2024-11-04 02:19:31.014595] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63121) is not found. 
Dropping the request. 00:08:43.981 [2024-11-04 02:19:31.014616] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63121) is not found. Dropping the request. 00:08:43.981 [2024-11-04 02:19:31.014623] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63121) is not found. Dropping the request. 00:08:43.981 [2024-11-04 02:19:31.015629] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63121) is not found. Dropping the request. 00:08:43.981 [2024-11-04 02:19:31.015654] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63121) is not found. Dropping the request. 00:08:43.981 [2024-11-04 02:19:31.015661] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63121) is not found. Dropping the request. 00:08:43.981 Child process pid: 63649 00:08:44.258 [Child] Asynchronous Event Request test 00:08:44.258 [Child] Attached to 0000:00:10.0 00:08:44.258 [Child] Attached to 0000:00:11.0 00:08:44.258 [Child] Attached to 0000:00:13.0 00:08:44.258 [Child] Attached to 0000:00:12.0 00:08:44.258 [Child] Registering asynchronous event callbacks... 00:08:44.258 [Child] Getting orig temperature thresholds of all controllers 00:08:44.258 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:44.258 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:44.258 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:44.258 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:44.258 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:44.258 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:44.258 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:44.258 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:44.258 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:44.258 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:44.258 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:44.258 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:44.258 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:44.258 [Child] Cleaning up... 00:08:44.258 Asynchronous Event Request test 00:08:44.258 Attached to 0000:00:10.0 00:08:44.258 Attached to 0000:00:11.0 00:08:44.258 Attached to 0000:00:13.0 00:08:44.258 Attached to 0000:00:12.0 00:08:44.258 Reset controller to setup AER completions for this process 00:08:44.258 Registering asynchronous event callbacks... 
00:08:44.258 Getting orig temperature thresholds of all controllers 00:08:44.258 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:44.258 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:44.258 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:44.258 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:44.258 Setting all controllers temperature threshold low to trigger AER 00:08:44.258 Waiting for all controllers temperature threshold to be set lower 00:08:44.258 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:44.258 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:44.258 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:44.258 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:44.258 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:44.258 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:44.258 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:44.258 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:44.258 Waiting for all controllers to trigger AER and reset threshold 00:08:44.258 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:44.258 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:44.258 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:44.258 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:44.258 Cleaning up... 00:08:44.258 00:08:44.258 real 0m0.428s 00:08:44.258 user 0m0.144s 00:08:44.258 sys 0m0.180s 00:08:44.258 02:19:31 nvme.nvme_multi_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:44.258 02:19:31 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:44.258 ************************************ 00:08:44.258 END TEST nvme_multi_aen 00:08:44.258 ************************************ 00:08:44.258 02:19:31 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:44.258 02:19:31 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:44.258 02:19:31 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:44.258 02:19:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:44.258 ************************************ 00:08:44.258 START TEST nvme_startup 00:08:44.258 ************************************ 00:08:44.258 02:19:31 nvme.nvme_startup -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:44.523 Initializing NVMe Controllers 00:08:44.523 Attached to 0000:00:10.0 00:08:44.523 Attached to 0000:00:11.0 00:08:44.523 Attached to 0000:00:13.0 00:08:44.523 Attached to 0000:00:12.0 00:08:44.523 Initialization complete. 00:08:44.523 Time used:146575.125 (us). 
00:08:44.523 00:08:44.523 real 0m0.208s 00:08:44.523 user 0m0.071s 00:08:44.523 sys 0m0.091s 00:08:44.523 02:19:31 nvme.nvme_startup -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:44.523 02:19:31 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:44.523 ************************************ 00:08:44.523 END TEST nvme_startup 00:08:44.523 ************************************ 00:08:44.523 02:19:31 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:44.523 02:19:31 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:44.523 02:19:31 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:44.523 02:19:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:44.523 ************************************ 00:08:44.523 START TEST nvme_multi_secondary 00:08:44.523 ************************************ 00:08:44.523 02:19:31 nvme.nvme_multi_secondary -- common/autotest_common.sh@1127 -- # nvme_multi_secondary 00:08:44.523 02:19:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63699 00:08:44.523 02:19:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:44.523 02:19:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63700 00:08:44.523 02:19:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:44.523 02:19:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:47.813 Initializing NVMe Controllers 00:08:47.813 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:47.813 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:47.813 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:47.813 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:47.813 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:47.813 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:47.813 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:47.813 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:47.813 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:47.813 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:47.813 Initialization complete. Launching workers. 
00:08:47.813 ======================================================== 00:08:47.813 Latency(us) 00:08:47.813 Device Information : IOPS MiB/s Average min max 00:08:47.813 PCIE (0000:00:10.0) NSID 1 from core 1: 5753.34 22.47 2779.61 725.36 13045.09 00:08:47.813 PCIE (0000:00:11.0) NSID 1 from core 1: 5753.34 22.47 2781.59 744.52 16459.14 00:08:47.813 PCIE (0000:00:13.0) NSID 1 from core 1: 5753.34 22.47 2782.56 718.88 17957.98 00:08:47.813 PCIE (0000:00:12.0) NSID 1 from core 1: 5753.34 22.47 2783.13 724.47 16869.60 00:08:47.813 PCIE (0000:00:12.0) NSID 2 from core 1: 5753.34 22.47 2785.58 727.45 13922.13 00:08:47.813 PCIE (0000:00:12.0) NSID 3 from core 1: 5753.34 22.47 2786.09 727.17 13839.05 00:08:47.813 ======================================================== 00:08:47.813 Total : 34520.01 134.84 2783.09 718.88 17957.98 00:08:47.813 00:08:48.074 Initializing NVMe Controllers 00:08:48.074 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:48.074 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:48.074 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:48.074 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:48.074 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:48.074 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:48.074 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:48.074 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:48.074 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:48.074 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:48.074 Initialization complete. Launching workers. 00:08:48.074 ======================================================== 00:08:48.074 Latency(us) 00:08:48.074 Device Information : IOPS MiB/s Average min max 00:08:48.074 PCIE (0000:00:10.0) NSID 1 from core 2: 2021.56 7.90 7913.32 1239.76 39696.56 00:08:48.074 PCIE (0000:00:11.0) NSID 1 from core 2: 2021.56 7.90 7915.95 1296.63 35002.25 00:08:48.074 PCIE (0000:00:13.0) NSID 1 from core 2: 2021.56 7.90 7915.81 1285.88 38807.78 00:08:48.074 PCIE (0000:00:12.0) NSID 1 from core 2: 2021.56 7.90 7915.65 1218.06 37515.15 00:08:48.074 PCIE (0000:00:12.0) NSID 2 from core 2: 2021.56 7.90 7916.14 1291.66 34348.65 00:08:48.074 PCIE (0000:00:12.0) NSID 3 from core 2: 2021.56 7.90 7916.02 1400.84 36657.80 00:08:48.074 ======================================================== 00:08:48.074 Total : 12129.37 47.38 7915.48 1218.06 39696.56 00:08:48.074 00:08:48.074 02:19:35 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63699 00:08:49.980 Initializing NVMe Controllers 00:08:49.980 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:49.980 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:49.980 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:49.980 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:49.980 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:49.980 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:49.980 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:49.980 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:49.980 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:49.980 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:49.980 Initialization complete. Launching workers. 
00:08:49.980 ======================================================== 00:08:49.980 Latency(us) 00:08:49.980 Device Information : IOPS MiB/s Average min max 00:08:49.980 PCIE (0000:00:10.0) NSID 1 from core 0: 6273.99 24.51 2548.77 686.41 16551.47 00:08:49.980 PCIE (0000:00:11.0) NSID 1 from core 0: 6273.99 24.51 2549.94 691.22 15355.75 00:08:49.980 PCIE (0000:00:13.0) NSID 1 from core 0: 6273.99 24.51 2549.90 695.55 15953.76 00:08:49.980 PCIE (0000:00:12.0) NSID 1 from core 0: 6273.99 24.51 2549.87 701.14 14007.55 00:08:49.980 PCIE (0000:00:12.0) NSID 2 from core 0: 6273.99 24.51 2549.83 700.11 14443.85 00:08:49.980 PCIE (0000:00:12.0) NSID 3 from core 0: 6273.99 24.51 2549.81 697.44 15453.14 00:08:49.980 ======================================================== 00:08:49.980 Total : 37643.91 147.05 2549.69 686.41 16551.47 00:08:49.980 00:08:49.980 02:19:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63700 00:08:49.980 02:19:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63769 00:08:49.980 02:19:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:49.980 02:19:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63770 00:08:49.980 02:19:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:49.980 02:19:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:53.284 Initializing NVMe Controllers 00:08:53.284 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:53.284 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:53.284 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:53.284 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:53.284 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:53.284 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:53.284 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:53.284 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:53.284 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:53.284 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:53.284 Initialization complete. Launching workers. 
00:08:53.284 ======================================================== 00:08:53.284 Latency(us) 00:08:53.284 Device Information : IOPS MiB/s Average min max 00:08:53.284 PCIE (0000:00:10.0) NSID 1 from core 0: 3544.55 13.85 4512.24 919.13 13404.96 00:08:53.284 PCIE (0000:00:11.0) NSID 1 from core 0: 3544.55 13.85 4514.26 904.18 15822.32 00:08:53.284 PCIE (0000:00:13.0) NSID 1 from core 0: 3544.55 13.85 4514.70 819.67 15266.29 00:08:53.284 PCIE (0000:00:12.0) NSID 1 from core 0: 3544.55 13.85 4515.64 819.66 14708.31 00:08:53.284 PCIE (0000:00:12.0) NSID 2 from core 0: 3544.55 13.85 4516.97 934.59 13727.81 00:08:53.284 PCIE (0000:00:12.0) NSID 3 from core 0: 3544.55 13.85 4518.83 916.91 13727.82 00:08:53.284 ======================================================== 00:08:53.285 Total : 21267.32 83.08 4515.44 819.66 15822.32 00:08:53.285 00:08:53.285 Initializing NVMe Controllers 00:08:53.285 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:53.285 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:53.285 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:53.285 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:53.285 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:53.285 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:53.285 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:53.285 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:53.285 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:53.285 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:53.285 Initialization complete. Launching workers. 00:08:53.285 ======================================================== 00:08:53.285 Latency(us) 00:08:53.285 Device Information : IOPS MiB/s Average min max 00:08:53.285 PCIE (0000:00:10.0) NSID 1 from core 1: 3385.53 13.22 4724.16 909.55 13635.59 00:08:53.285 PCIE (0000:00:11.0) NSID 1 from core 1: 3385.53 13.22 4726.86 774.18 13904.53 00:08:53.285 PCIE (0000:00:13.0) NSID 1 from core 1: 3385.53 13.22 4726.82 856.86 13734.53 00:08:53.285 PCIE (0000:00:12.0) NSID 1 from core 1: 3385.53 13.22 4727.53 951.28 14201.76 00:08:53.285 PCIE (0000:00:12.0) NSID 2 from core 1: 3385.53 13.22 4729.09 954.52 13584.92 00:08:53.285 PCIE (0000:00:12.0) NSID 3 from core 1: 3385.53 13.22 4729.08 947.22 13487.97 00:08:53.285 ======================================================== 00:08:53.285 Total : 20313.20 79.35 4727.26 774.18 14201.76 00:08:53.285 00:08:55.194 Initializing NVMe Controllers 00:08:55.194 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:55.194 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:55.194 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:55.194 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:55.194 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:55.194 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:55.194 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:55.194 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:55.194 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:55.194 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:55.194 Initialization complete. Launching workers. 
00:08:55.194 ======================================================== 00:08:55.194 Latency(us) 00:08:55.194 Device Information : IOPS MiB/s Average min max 00:08:55.194 PCIE (0000:00:10.0) NSID 1 from core 2: 1670.44 6.53 9576.38 1023.02 29843.05 00:08:55.194 PCIE (0000:00:11.0) NSID 1 from core 2: 1671.84 6.53 9570.71 1078.23 37197.58 00:08:55.194 PCIE (0000:00:13.0) NSID 1 from core 2: 1671.84 6.53 9570.56 1042.14 32547.10 00:08:55.194 PCIE (0000:00:12.0) NSID 1 from core 2: 1671.84 6.53 9569.92 1030.08 35849.19 00:08:55.194 PCIE (0000:00:12.0) NSID 2 from core 2: 1671.84 6.53 9570.25 1069.52 34964.41 00:08:55.194 PCIE (0000:00:12.0) NSID 3 from core 2: 1671.84 6.53 9570.07 1034.04 31493.80 00:08:55.194 ======================================================== 00:08:55.194 Total : 10029.66 39.18 9571.31 1023.02 37197.58 00:08:55.194 00:08:55.456 02:19:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63769 00:08:55.456 02:19:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63770 00:08:55.456 00:08:55.456 real 0m10.805s 00:08:55.456 user 0m18.336s 00:08:55.456 sys 0m0.711s 00:08:55.456 ************************************ 00:08:55.456 END TEST nvme_multi_secondary 00:08:55.456 ************************************ 00:08:55.456 02:19:42 nvme.nvme_multi_secondary -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:55.456 02:19:42 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:55.456 02:19:42 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:55.456 02:19:42 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:55.456 02:19:42 nvme -- common/autotest_common.sh@1091 -- # [[ -e /proc/62730 ]] 00:08:55.456 02:19:42 nvme -- common/autotest_common.sh@1092 -- # kill 62730 00:08:55.456 02:19:42 nvme -- common/autotest_common.sh@1093 -- # wait 62730 00:08:55.456 [2024-11-04 02:19:42.397144] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63648) is not found. Dropping the request. 00:08:55.456 [2024-11-04 02:19:42.397254] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63648) is not found. Dropping the request. 00:08:55.456 [2024-11-04 02:19:42.397299] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63648) is not found. Dropping the request. 00:08:55.456 [2024-11-04 02:19:42.397336] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63648) is not found. Dropping the request. 00:08:55.456 [2024-11-04 02:19:42.401095] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63648) is not found. Dropping the request. 00:08:55.456 [2024-11-04 02:19:42.401176] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63648) is not found. Dropping the request. 00:08:55.456 [2024-11-04 02:19:42.401202] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63648) is not found. Dropping the request. 00:08:55.456 [2024-11-04 02:19:42.401230] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63648) is not found. Dropping the request. 00:08:55.456 [2024-11-04 02:19:42.404304] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63648) is not found. Dropping the request. 
00:08:55.456 [2024-11-04 02:19:42.404338] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63648) is not found. Dropping the request. 00:08:55.456 [2024-11-04 02:19:42.404348] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63648) is not found. Dropping the request. 00:08:55.456 [2024-11-04 02:19:42.404359] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63648) is not found. Dropping the request. 00:08:55.456 [2024-11-04 02:19:42.405889] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63648) is not found. Dropping the request. 00:08:55.456 [2024-11-04 02:19:42.405924] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63648) is not found. Dropping the request. 00:08:55.456 [2024-11-04 02:19:42.405934] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63648) is not found. Dropping the request. 00:08:55.456 [2024-11-04 02:19:42.405946] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63648) is not found. Dropping the request. 00:08:55.456 02:19:42 nvme -- common/autotest_common.sh@1095 -- # rm -f /var/run/spdk_stub0 00:08:55.456 02:19:42 nvme -- common/autotest_common.sh@1099 -- # echo 2 00:08:55.456 02:19:42 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:55.456 02:19:42 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:55.456 02:19:42 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:55.456 02:19:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:55.456 ************************************ 00:08:55.456 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:55.456 ************************************ 00:08:55.456 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:55.717 * Looking for test storage... 
00:08:55.717 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lcov --version 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:55.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.717 --rc genhtml_branch_coverage=1 00:08:55.717 --rc genhtml_function_coverage=1 00:08:55.717 --rc genhtml_legend=1 00:08:55.717 --rc geninfo_all_blocks=1 00:08:55.717 --rc geninfo_unexecuted_blocks=1 00:08:55.717 00:08:55.717 ' 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:55.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.717 --rc genhtml_branch_coverage=1 00:08:55.717 --rc genhtml_function_coverage=1 00:08:55.717 --rc genhtml_legend=1 00:08:55.717 --rc geninfo_all_blocks=1 00:08:55.717 --rc geninfo_unexecuted_blocks=1 00:08:55.717 00:08:55.717 ' 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:55.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.717 --rc genhtml_branch_coverage=1 00:08:55.717 --rc genhtml_function_coverage=1 00:08:55.717 --rc genhtml_legend=1 00:08:55.717 --rc geninfo_all_blocks=1 00:08:55.717 --rc geninfo_unexecuted_blocks=1 00:08:55.717 00:08:55.717 ' 00:08:55.717 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:55.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.718 --rc genhtml_branch_coverage=1 00:08:55.718 --rc genhtml_function_coverage=1 00:08:55.718 --rc genhtml_legend=1 00:08:55.718 --rc geninfo_all_blocks=1 00:08:55.718 --rc geninfo_unexecuted_blocks=1 00:08:55.718 00:08:55.718 ' 00:08:55.718 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:55.718 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:55.718 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:55.718 
02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:55.718 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:55.718 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:55.718 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:08:55.718 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:08:55.718 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:08:55.718 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:08:55.718 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:55.718 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:08:55.718 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:55.718 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:55.718 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:55.718 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:55.718 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:55.718 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:08:55.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.718 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:55.718 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:55.718 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=63933 00:08:55.718 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:55.718 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:55.718 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 63933 00:08:55.718 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # '[' -z 63933 ']' 00:08:55.718 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.718 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:55.718 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:55.718 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:55.718 02:19:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:55.718 [2024-11-04 02:19:42.815086] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:08:55.718 [2024-11-04 02:19:42.815717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63933 ] 00:08:55.979 [2024-11-04 02:19:42.987269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:56.240 [2024-11-04 02:19:43.091853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.240 [2024-11-04 02:19:43.092052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:56.240 [2024-11-04 02:19:43.092792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:56.240 [2024-11-04 02:19:43.092921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.809 02:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:56.809 02:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@866 -- # return 0 00:08:56.809 02:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:56.809 02:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.809 02:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:56.809 nvme0n1 00:08:56.809 02:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.809 02:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:56.809 02:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_hVVA6.txt 00:08:56.809 02:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:56.809 02:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.809 02:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:56.809 true 00:08:56.809 02:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.809 02:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:56.809 02:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1730686783 00:08:56.809 02:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=63956 00:08:56.809 02:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:56.809 02:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:56.809 02:19:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:58.714 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:58.714 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.714 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:58.714 [2024-11-04 02:19:45.785984] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:58.714 [2024-11-04 02:19:45.786261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:58.714 [2024-11-04 02:19:45.786289] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:58.714 [2024-11-04 02:19:45.786302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:58.714 [2024-11-04 02:19:45.788327] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:58.714 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 63956 00:08:58.714 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.714 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 63956 00:08:58.714 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 63956 00:08:58.714 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:58.714 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:58.714 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:58.714 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.714 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:58.973 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.973 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:58.973 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_hVVA6.txt 00:08:58.973 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:58.973 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:58.973 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:58.973 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:58.973 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:58.973 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:58.973 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:58.973 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:58.973 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:58.973 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:58.973 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:58.973 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:58.973 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:58.973 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:58.973 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:58.973 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:58.973 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:58.973 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:58.973 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:58.973 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_hVVA6.txt 00:08:58.973 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 63933 00:08:58.973 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # '[' -z 63933 ']' 00:08:58.973 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # kill -0 63933 00:08:58.973 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # uname 00:08:58.973 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:58.973 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63933 00:08:58.973 killing process with pid 63933 00:08:58.973 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:58.973 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:58.973 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63933' 00:08:58.973 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@971 -- # kill 63933 00:08:58.973 02:19:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@976 -- # wait 63933 00:09:00.352 02:19:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:00.352 02:19:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:00.352 00:09:00.352 real 0m4.702s 00:09:00.352 user 0m16.676s 00:09:00.352 sys 0m0.507s 00:09:00.352 ************************************ 00:09:00.352 END TEST bdev_nvme_reset_stuck_adm_cmd 
00:09:00.352 ************************************ 00:09:00.352 02:19:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:00.352 02:19:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:00.352 02:19:47 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:00.352 02:19:47 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:00.353 02:19:47 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:00.353 02:19:47 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:00.353 02:19:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:00.353 ************************************ 00:09:00.353 START TEST nvme_fio 00:09:00.353 ************************************ 00:09:00.353 02:19:47 nvme.nvme_fio -- common/autotest_common.sh@1127 -- # nvme_fio_test 00:09:00.353 02:19:47 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:00.353 02:19:47 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:00.353 02:19:47 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:00.353 02:19:47 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:00.353 02:19:47 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:09:00.353 02:19:47 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:00.353 02:19:47 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:00.353 02:19:47 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:00.353 02:19:47 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:00.353 02:19:47 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:00.353 02:19:47 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:00.353 02:19:47 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:00.353 02:19:47 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:00.353 02:19:47 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:00.353 02:19:47 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:00.614 02:19:47 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:00.614 02:19:47 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:00.875 02:19:47 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:00.875 02:19:47 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:00.875 02:19:47 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:00.875 02:19:47 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:09:00.875 02:19:47 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:00.875 02:19:47 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:09:00.875 02:19:47 nvme.nvme_fio -- 
common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:00.875 02:19:47 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:09:00.875 02:19:47 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:09:00.875 02:19:47 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:09:00.875 02:19:47 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:00.875 02:19:47 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:09:00.875 02:19:47 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:09:00.875 02:19:47 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:00.875 02:19:47 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:00.875 02:19:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:09:00.875 02:19:47 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:00.875 02:19:47 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:00.875 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:00.875 fio-3.35 00:09:00.875 Starting 1 thread 00:09:06.159 00:09:06.159 test: (groupid=0, jobs=1): err= 0: pid=64096: Mon Nov 4 02:19:52 2024 00:09:06.159 read: IOPS=21.0k, BW=81.9MiB/s (85.9MB/s)(164MiB/2001msec) 00:09:06.159 slat (usec): min=3, max=167, avg= 5.07, stdev= 2.37 00:09:06.159 clat (usec): min=193, max=9283, avg=3046.11, stdev=934.53 00:09:06.159 lat (usec): min=197, max=9339, avg=3051.18, stdev=935.49 00:09:06.159 clat percentiles (usec): 00:09:06.159 | 1.00th=[ 1926], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2474], 00:09:06.159 | 30.00th=[ 2573], 40.00th=[ 2638], 50.00th=[ 2737], 60.00th=[ 2868], 00:09:06.159 | 70.00th=[ 3032], 80.00th=[ 3392], 90.00th=[ 4228], 95.00th=[ 5211], 00:09:06.159 | 99.00th=[ 6587], 99.50th=[ 6783], 99.90th=[ 7504], 99.95th=[ 8094], 00:09:06.159 | 99.99th=[ 8979] 00:09:06.159 bw ( KiB/s): min=80248, max=88160, per=100.00%, avg=85077.33, stdev=4235.34, samples=3 00:09:06.159 iops : min=20062, max=22040, avg=21269.33, stdev=1058.83, samples=3 00:09:06.159 write: IOPS=20.9k, BW=81.5MiB/s (85.4MB/s)(163MiB/2001msec); 0 zone resets 00:09:06.159 slat (nsec): min=3442, max=82857, avg=5356.43, stdev=2270.51 00:09:06.159 clat (usec): min=201, max=9521, avg=3051.03, stdev=931.94 00:09:06.159 lat (usec): min=205, max=9526, avg=3056.38, stdev=932.94 00:09:06.159 clat percentiles (usec): 00:09:06.159 | 1.00th=[ 1975], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2507], 00:09:06.159 | 30.00th=[ 2573], 40.00th=[ 2671], 50.00th=[ 2737], 60.00th=[ 2868], 00:09:06.159 | 70.00th=[ 3032], 80.00th=[ 3392], 90.00th=[ 4178], 95.00th=[ 5211], 00:09:06.159 | 99.00th=[ 6652], 99.50th=[ 6849], 99.90th=[ 7504], 99.95th=[ 8094], 00:09:06.159 | 99.99th=[ 8979] 00:09:06.159 bw ( KiB/s): min=80160, max=88248, per=100.00%, avg=85237.33, stdev=4422.36, samples=3 00:09:06.159 iops : min=20040, max=22062, avg=21309.33, stdev=1105.59, samples=3 00:09:06.159 lat (usec) : 250=0.01%, 500=0.02%, 750=0.03%, 1000=0.04% 00:09:06.159 lat (msec) : 2=1.11%, 4=87.09%, 10=11.71% 00:09:06.159 cpu : usr=99.20%, sys=0.05%, ctx=3, majf=0, minf=607 
00:09:06.159 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:06.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:06.159 issued rwts: total=41955,41734,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.159 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:06.159 00:09:06.160 Run status group 0 (all jobs): 00:09:06.160 READ: bw=81.9MiB/s (85.9MB/s), 81.9MiB/s-81.9MiB/s (85.9MB/s-85.9MB/s), io=164MiB (172MB), run=2001-2001msec 00:09:06.160 WRITE: bw=81.5MiB/s (85.4MB/s), 81.5MiB/s-81.5MiB/s (85.4MB/s-85.4MB/s), io=163MiB (171MB), run=2001-2001msec 00:09:06.160 ----------------------------------------------------- 00:09:06.160 Suppressions used: 00:09:06.160 count bytes template 00:09:06.160 1 32 /usr/src/fio/parse.c 00:09:06.160 1 8 libtcmalloc_minimal.so 00:09:06.160 ----------------------------------------------------- 00:09:06.160 00:09:06.160 02:19:52 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:06.160 02:19:52 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:06.160 02:19:52 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:06.160 02:19:52 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:06.160 02:19:53 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:06.160 02:19:53 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:06.421 02:19:53 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:06.421 02:19:53 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:06.421 02:19:53 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:06.421 02:19:53 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:09:06.421 02:19:53 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:06.421 02:19:53 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:09:06.421 02:19:53 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:06.421 02:19:53 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:09:06.421 02:19:53 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:09:06.421 02:19:53 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:09:06.421 02:19:53 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:09:06.421 02:19:53 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:06.421 02:19:53 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:09:06.421 02:19:53 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:06.421 02:19:53 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:06.421 02:19:53 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:09:06.421 02:19:53 nvme.nvme_fio -- 
common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:06.422 02:19:53 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:06.682 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:06.682 fio-3.35 00:09:06.682 Starting 1 thread 00:09:11.971 00:09:11.971 test: (groupid=0, jobs=1): err= 0: pid=64152: Mon Nov 4 02:19:59 2024 00:09:11.971 read: IOPS=20.8k, BW=81.2MiB/s (85.1MB/s)(162MiB/2001msec) 00:09:11.971 slat (nsec): min=4802, max=83925, avg=5854.28, stdev=2163.07 00:09:11.971 clat (usec): min=312, max=9380, avg=3064.25, stdev=916.39 00:09:11.971 lat (usec): min=319, max=9385, avg=3070.11, stdev=917.60 00:09:11.971 clat percentiles (usec): 00:09:11.971 | 1.00th=[ 2073], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2507], 00:09:11.971 | 30.00th=[ 2606], 40.00th=[ 2704], 50.00th=[ 2769], 60.00th=[ 2868], 00:09:11.971 | 70.00th=[ 3032], 80.00th=[ 3294], 90.00th=[ 4228], 95.00th=[ 5080], 00:09:11.971 | 99.00th=[ 6652], 99.50th=[ 6915], 99.90th=[ 7570], 99.95th=[ 8029], 00:09:11.971 | 99.99th=[ 8717] 00:09:11.971 bw ( KiB/s): min=80384, max=87720, per=100.00%, avg=83882.67, stdev=3679.71, samples=3 00:09:11.971 iops : min=20096, max=21930, avg=20970.67, stdev=919.93, samples=3 00:09:11.971 write: IOPS=20.7k, BW=80.9MiB/s (84.8MB/s)(162MiB/2001msec); 0 zone resets 00:09:11.971 slat (nsec): min=4944, max=62586, avg=6128.33, stdev=2140.64 00:09:11.971 clat (usec): min=556, max=8774, avg=3079.87, stdev=914.49 00:09:11.971 lat (usec): min=564, max=8794, avg=3085.99, stdev=915.68 00:09:11.971 clat percentiles (usec): 00:09:11.971 | 1.00th=[ 2089], 5.00th=[ 2311], 10.00th=[ 2409], 20.00th=[ 2540], 00:09:11.971 | 30.00th=[ 2606], 40.00th=[ 2704], 50.00th=[ 2802], 60.00th=[ 2900], 00:09:11.971 | 70.00th=[ 3032], 80.00th=[ 3326], 90.00th=[ 4228], 95.00th=[ 5145], 00:09:11.971 | 99.00th=[ 6652], 99.50th=[ 6915], 99.90th=[ 7635], 99.95th=[ 7963], 00:09:11.971 | 99.99th=[ 8291] 00:09:11.972 bw ( KiB/s): min=80952, max=87296, per=100.00%, avg=83986.67, stdev=3180.91, samples=3 00:09:11.972 iops : min=20238, max=21824, avg=20996.67, stdev=795.23, samples=3 00:09:11.972 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:09:11.972 lat (msec) : 2=0.42%, 4=87.49%, 10=12.06% 00:09:11.972 cpu : usr=99.10%, sys=0.20%, ctx=15, majf=0, minf=607 00:09:11.972 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:11.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:11.972 issued rwts: total=41589,41418,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.972 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:11.972 00:09:11.972 Run status group 0 (all jobs): 00:09:11.972 READ: bw=81.2MiB/s (85.1MB/s), 81.2MiB/s-81.2MiB/s (85.1MB/s-85.1MB/s), io=162MiB (170MB), run=2001-2001msec 00:09:11.972 WRITE: bw=80.9MiB/s (84.8MB/s), 80.9MiB/s-80.9MiB/s (84.8MB/s-84.8MB/s), io=162MiB (170MB), run=2001-2001msec 00:09:12.232 ----------------------------------------------------- 00:09:12.232 Suppressions used: 00:09:12.232 count bytes template 00:09:12.232 1 32 /usr/src/fio/parse.c 00:09:12.232 1 8 libtcmalloc_minimal.so 00:09:12.232 ----------------------------------------------------- 00:09:12.232 00:09:12.232 02:19:59 
nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:12.232 02:19:59 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:12.232 02:19:59 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:12.232 02:19:59 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:12.494 02:19:59 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:12.494 02:19:59 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:12.755 02:19:59 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:12.755 02:19:59 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:12.755 02:19:59 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:12.755 02:19:59 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:09:12.755 02:19:59 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:12.755 02:19:59 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:09:12.755 02:19:59 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:12.755 02:19:59 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:09:12.755 02:19:59 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:09:12.755 02:19:59 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:09:12.755 02:19:59 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:09:12.755 02:19:59 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:09:12.755 02:19:59 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:12.755 02:19:59 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:12.755 02:19:59 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:12.755 02:19:59 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:09:12.755 02:19:59 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:12.755 02:19:59 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:12.755 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:12.755 fio-3.35 00:09:12.755 Starting 1 thread 00:09:18.045 00:09:18.045 test: (groupid=0, jobs=1): err= 0: pid=64213: Mon Nov 4 02:20:04 2024 00:09:18.045 read: IOPS=15.4k, BW=60.3MiB/s (63.3MB/s)(121MiB/2001msec) 00:09:18.045 slat (nsec): min=4243, max=84881, avg=5918.94, stdev=3344.09 00:09:18.045 clat (usec): min=738, max=11979, avg=4123.45, stdev=1516.63 00:09:18.045 lat (usec): min=744, max=11989, avg=4129.37, stdev=1517.93 00:09:18.045 clat percentiles (usec): 00:09:18.045 | 1.00th=[ 2245], 5.00th=[ 2442], 10.00th=[ 2573], 20.00th=[ 2737], 00:09:18.045 | 30.00th=[ 2900], 40.00th=[ 
3130], 50.00th=[ 3589], 60.00th=[ 4490], 00:09:18.045 | 70.00th=[ 5080], 80.00th=[ 5538], 90.00th=[ 6194], 95.00th=[ 6718], 00:09:18.045 | 99.00th=[ 8160], 99.50th=[ 8848], 99.90th=[10552], 99.95th=[11207], 00:09:18.045 | 99.99th=[11731] 00:09:18.045 bw ( KiB/s): min=52928, max=78680, per=100.00%, avg=64738.67, stdev=13007.54, samples=3 00:09:18.045 iops : min=13232, max=19670, avg=16184.67, stdev=3251.89, samples=3 00:09:18.045 write: IOPS=15.4k, BW=60.3MiB/s (63.3MB/s)(121MiB/2001msec); 0 zone resets 00:09:18.045 slat (nsec): min=4301, max=68021, avg=6164.67, stdev=3374.70 00:09:18.045 clat (usec): min=749, max=12026, avg=4134.32, stdev=1506.05 00:09:18.045 lat (usec): min=756, max=12037, avg=4140.49, stdev=1507.30 00:09:18.045 clat percentiles (usec): 00:09:18.045 | 1.00th=[ 2245], 5.00th=[ 2442], 10.00th=[ 2573], 20.00th=[ 2737], 00:09:18.045 | 30.00th=[ 2900], 40.00th=[ 3163], 50.00th=[ 3621], 60.00th=[ 4490], 00:09:18.045 | 70.00th=[ 5080], 80.00th=[ 5538], 90.00th=[ 6194], 95.00th=[ 6718], 00:09:18.045 | 99.00th=[ 8094], 99.50th=[ 8717], 99.90th=[10683], 99.95th=[11207], 00:09:18.045 | 99.99th=[11863] 00:09:18.045 bw ( KiB/s): min=52032, max=78456, per=100.00%, avg=64493.33, stdev=13275.82, samples=3 00:09:18.045 iops : min=13008, max=19614, avg=16123.33, stdev=3318.96, samples=3 00:09:18.045 lat (usec) : 750=0.01%, 1000=0.02% 00:09:18.045 lat (msec) : 2=0.13%, 4=54.29%, 10=45.38%, 20=0.18% 00:09:18.046 cpu : usr=98.75%, sys=0.10%, ctx=15, majf=0, minf=607 00:09:18.046 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:18.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:18.046 issued rwts: total=30900,30914,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.046 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:18.046 00:09:18.046 Run status group 0 (all jobs): 00:09:18.046 READ: bw=60.3MiB/s (63.3MB/s), 60.3MiB/s-60.3MiB/s (63.3MB/s-63.3MB/s), io=121MiB (127MB), run=2001-2001msec 00:09:18.046 WRITE: bw=60.3MiB/s (63.3MB/s), 60.3MiB/s-60.3MiB/s (63.3MB/s-63.3MB/s), io=121MiB (127MB), run=2001-2001msec 00:09:18.046 ----------------------------------------------------- 00:09:18.046 Suppressions used: 00:09:18.046 count bytes template 00:09:18.046 1 32 /usr/src/fio/parse.c 00:09:18.046 1 8 libtcmalloc_minimal.so 00:09:18.046 ----------------------------------------------------- 00:09:18.046 00:09:18.046 02:20:05 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:18.046 02:20:05 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:18.046 02:20:05 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:18.046 02:20:05 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:18.307 02:20:05 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:18.307 02:20:05 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:18.568 02:20:05 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:18.568 02:20:05 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:18.568 02:20:05 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:18.568 02:20:05 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:09:18.568 02:20:05 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:18.568 02:20:05 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:09:18.568 02:20:05 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:18.568 02:20:05 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:09:18.568 02:20:05 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:09:18.568 02:20:05 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:09:18.568 02:20:05 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:18.568 02:20:05 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:09:18.568 02:20:05 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:09:18.568 02:20:05 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:18.568 02:20:05 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:18.568 02:20:05 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:09:18.568 02:20:05 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:18.568 02:20:05 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:18.830 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:18.830 fio-3.35 00:09:18.830 Starting 1 thread 00:09:27.033 00:09:27.033 test: (groupid=0, jobs=1): err= 0: pid=64279: Mon Nov 4 02:20:12 2024 00:09:27.033 read: IOPS=17.2k, BW=67.0MiB/s (70.3MB/s)(134MiB/2001msec) 00:09:27.033 slat (nsec): min=4246, max=78663, avg=5530.82, stdev=2625.25 00:09:27.033 clat (usec): min=465, max=9578, avg=3708.71, stdev=1338.21 00:09:27.033 lat (usec): min=469, max=9597, avg=3714.24, stdev=1339.05 00:09:27.033 clat percentiles (usec): 00:09:27.033 | 1.00th=[ 2073], 5.00th=[ 2311], 10.00th=[ 2442], 20.00th=[ 2606], 00:09:27.033 | 30.00th=[ 2769], 40.00th=[ 2933], 50.00th=[ 3163], 60.00th=[ 3589], 00:09:27.033 | 70.00th=[ 4228], 80.00th=[ 5014], 90.00th=[ 5800], 95.00th=[ 6325], 00:09:27.033 | 99.00th=[ 7373], 99.50th=[ 7832], 99.90th=[ 8586], 99.95th=[ 8848], 00:09:27.033 | 99.99th=[ 9110] 00:09:27.033 bw ( KiB/s): min=58304, max=80368, per=96.11%, avg=65958.67, stdev=12486.96, samples=3 00:09:27.033 iops : min=14576, max=20092, avg=16489.67, stdev=3121.74, samples=3 00:09:27.033 write: IOPS=17.2k, BW=67.1MiB/s (70.4MB/s)(134MiB/2001msec); 0 zone resets 00:09:27.033 slat (nsec): min=4305, max=49575, avg=5781.56, stdev=2558.79 00:09:27.033 clat (usec): min=457, max=9470, avg=3719.85, stdev=1344.20 00:09:27.033 lat (usec): min=461, max=9477, avg=3725.63, stdev=1345.04 00:09:27.033 clat percentiles (usec): 00:09:27.033 | 1.00th=[ 2114], 5.00th=[ 2311], 10.00th=[ 2442], 20.00th=[ 2638], 00:09:27.033 | 30.00th=[ 2769], 40.00th=[ 2933], 50.00th=[ 3163], 60.00th=[ 3589], 00:09:27.033 | 70.00th=[ 4228], 80.00th=[ 5014], 90.00th=[ 5800], 95.00th=[ 6390], 
00:09:27.033 | 99.00th=[ 7439], 99.50th=[ 7898], 99.90th=[ 8586], 99.95th=[ 8717], 00:09:27.033 | 99.99th=[ 9110] 00:09:27.033 bw ( KiB/s): min=58250, max=80832, per=95.89%, avg=65904.67, stdev=12928.86, samples=3 00:09:27.033 iops : min=14562, max=20208, avg=16476.00, stdev=3232.36, samples=3 00:09:27.033 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.02% 00:09:27.033 lat (msec) : 2=0.45%, 4=66.45%, 10=33.04% 00:09:27.033 cpu : usr=99.05%, sys=0.00%, ctx=3, majf=0, minf=605 00:09:27.033 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:27.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.033 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:27.033 issued rwts: total=34331,34382,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.033 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:27.033 00:09:27.034 Run status group 0 (all jobs): 00:09:27.034 READ: bw=67.0MiB/s (70.3MB/s), 67.0MiB/s-67.0MiB/s (70.3MB/s-70.3MB/s), io=134MiB (141MB), run=2001-2001msec 00:09:27.034 WRITE: bw=67.1MiB/s (70.4MB/s), 67.1MiB/s-67.1MiB/s (70.4MB/s-70.4MB/s), io=134MiB (141MB), run=2001-2001msec 00:09:27.034 ----------------------------------------------------- 00:09:27.034 Suppressions used: 00:09:27.034 count bytes template 00:09:27.034 1 32 /usr/src/fio/parse.c 00:09:27.034 1 8 libtcmalloc_minimal.so 00:09:27.034 ----------------------------------------------------- 00:09:27.034 00:09:27.034 02:20:13 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:27.034 02:20:13 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:27.034 00:09:27.034 real 0m25.749s 00:09:27.034 user 0m21.556s 00:09:27.034 sys 0m4.062s 00:09:27.034 02:20:13 nvme.nvme_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:27.034 02:20:13 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:27.034 ************************************ 00:09:27.034 END TEST nvme_fio 00:09:27.034 ************************************ 00:09:27.034 00:09:27.034 real 1m35.064s 00:09:27.034 user 3m41.466s 00:09:27.034 sys 0m14.757s 00:09:27.034 02:20:13 nvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:27.034 02:20:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:27.034 ************************************ 00:09:27.034 END TEST nvme 00:09:27.034 ************************************ 00:09:27.034 02:20:13 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:27.034 02:20:13 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:27.034 02:20:13 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:27.034 02:20:13 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:27.034 02:20:13 -- common/autotest_common.sh@10 -- # set +x 00:09:27.034 ************************************ 00:09:27.034 START TEST nvme_scc 00:09:27.034 ************************************ 00:09:27.034 02:20:13 nvme_scc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:27.034 * Looking for test storage... 
00:09:27.034 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:27.034 02:20:13 nvme_scc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:27.034 02:20:13 nvme_scc -- common/autotest_common.sh@1691 -- # lcov --version 00:09:27.034 02:20:13 nvme_scc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:27.034 02:20:13 nvme_scc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:27.034 02:20:13 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:27.034 02:20:13 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:27.034 02:20:13 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:27.034 02:20:13 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:27.034 02:20:13 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:27.034 02:20:13 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:27.034 02:20:13 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:27.034 02:20:13 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:27.034 02:20:13 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:27.034 02:20:13 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:27.034 02:20:13 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:27.034 02:20:13 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:27.034 02:20:13 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:27.034 02:20:13 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:27.034 02:20:13 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:27.034 02:20:13 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:27.034 02:20:13 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:27.034 02:20:13 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:27.034 02:20:13 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:27.034 02:20:13 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:27.034 02:20:13 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:27.034 02:20:13 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:27.034 02:20:13 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:27.034 02:20:13 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:27.034 02:20:13 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:27.034 02:20:13 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:27.034 02:20:13 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:27.034 02:20:13 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:27.034 02:20:13 nvme_scc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:27.034 02:20:13 nvme_scc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:27.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.034 --rc genhtml_branch_coverage=1 00:09:27.034 --rc genhtml_function_coverage=1 00:09:27.034 --rc genhtml_legend=1 00:09:27.034 --rc geninfo_all_blocks=1 00:09:27.034 --rc geninfo_unexecuted_blocks=1 00:09:27.034 00:09:27.034 ' 00:09:27.034 02:20:13 nvme_scc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:27.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.034 --rc genhtml_branch_coverage=1 00:09:27.034 --rc genhtml_function_coverage=1 00:09:27.034 --rc genhtml_legend=1 00:09:27.034 --rc geninfo_all_blocks=1 00:09:27.034 --rc geninfo_unexecuted_blocks=1 00:09:27.034 00:09:27.034 ' 00:09:27.034 02:20:13 nvme_scc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:09:27.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.034 --rc genhtml_branch_coverage=1 00:09:27.034 --rc genhtml_function_coverage=1 00:09:27.034 --rc genhtml_legend=1 00:09:27.034 --rc geninfo_all_blocks=1 00:09:27.034 --rc geninfo_unexecuted_blocks=1 00:09:27.034 00:09:27.034 ' 00:09:27.034 02:20:13 nvme_scc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:27.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.034 --rc genhtml_branch_coverage=1 00:09:27.034 --rc genhtml_function_coverage=1 00:09:27.034 --rc genhtml_legend=1 00:09:27.034 --rc geninfo_all_blocks=1 00:09:27.034 --rc geninfo_unexecuted_blocks=1 00:09:27.034 00:09:27.034 ' 00:09:27.034 02:20:13 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:27.034 02:20:13 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:27.034 02:20:13 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:27.034 02:20:13 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:27.034 02:20:13 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:27.034 02:20:13 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:27.034 02:20:13 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.034 02:20:13 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.034 02:20:13 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.034 02:20:13 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.034 02:20:13 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.034 02:20:13 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.034 02:20:13 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:27.034 02:20:13 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:27.034 02:20:13 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:27.034 02:20:13 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:27.034 02:20:13 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:27.034 02:20:13 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:27.034 02:20:13 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:27.034 02:20:13 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:27.034 02:20:13 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:27.034 02:20:13 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:27.034 02:20:13 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:27.034 02:20:13 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:27.034 02:20:13 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:27.034 02:20:13 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:27.034 02:20:13 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:27.034 02:20:13 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:27.034 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:27.034 Waiting for block devices as requested 00:09:27.034 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:27.034 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:27.034 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:27.034 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:32.334 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:32.334 02:20:19 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:32.334 02:20:19 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:32.334 02:20:19 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:32.334 02:20:19 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:32.334 02:20:19 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:32.334 02:20:19 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:32.334 02:20:19 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:32.334 02:20:19 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:32.334 02:20:19 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:32.334 02:20:19 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:32.334 02:20:19 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:32.334 02:20:19 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:32.334 02:20:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:32.334 02:20:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:32.334 02:20:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:32.334 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.334 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.334 02:20:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:32.334 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.335 02:20:19 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.335 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.336 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:32.337 02:20:19 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 02:20:19 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.337 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 02:20:19 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.338 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:32.339 02:20:19 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.339 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
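[Editor's note] The geometry captured above pins down the usable size of nvme0n1: nsze=0x140000 blocks, and flbas=0x4 selects LBA format 4, which the lbaf table a little further down records as "lbads:12 (in use)", i.e. 2^12 = 4096-byte blocks. A quick arithmetic check with the values copied from this log (the flbas low-bits-select-format reading follows the NVMe base spec):

    nsze=0x140000   # from nvme0n1[nsze] above
    lbads=12        # from nvme0n1[lbaf4] below, the in-use format
    echo $(( nsze * (1 << lbads) ))           # 5368709120 bytes
    echo $(( (nsze * (1 << lbads)) >> 30 ))   # 5 (GiB)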
00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.340 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.341 02:20:19 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:32.341 02:20:19 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:32.341 02:20:19 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:32.341 02:20:19 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:32.341 02:20:19 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 02:20:19 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.341 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:32.342 02:20:19 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:32.342 
02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.342 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
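[Editor's note] The words being stored for nvme1 here are raw bitfields, so they are usually consumed with shifts and masks rather than read literally. Two decodes one can run directly on the captured values (field layouts are per the NVMe base spec, stated here as an assumption, not taken from this log):

    ver=0x10400     # nvme1[ver]: major 31:16, minor 15:8, tertiary 7:0
    printf 'NVMe %d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))
    # -> NVMe 1.4.0

    oacs=0x12a      # nvme1[oacs]: optional admin command support bitmask
    (( oacs & (1 << 1) )) && echo "Format NVM supported"
    (( oacs & (1 << 3) )) && echo "Namespace Management supported"
    (( oacs & (1 << 8) )) && echo "Doorbell Buffer Config supported"

0x12a has bits 1, 3, 5 and 8 set, which is consistent with a QEMU emulated controller (Doorbell Buffer Config is the QEMU-specific tell).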
00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:32.343 02:20:19 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:32.343 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.344 02:20:19 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:32.344 02:20:19 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.344 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:32.345 02:20:19 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.345 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
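[Editor's note] Once nvme1 and nvme1n1 finish below, the walk will have populated the same bookkeeping it did for nvme0 earlier in the trace: ctrls[nvmeX]=nvmeX, nvmes[nvmeX]=<name of the per-controller namespace array>, and bdfs[nvmeX]=<PCI address>. A sketch of how a consumer can iterate that layout with namerefs (bash 4.3+; the stub arrays mirror the values this log records, the loop itself is illustrative, not functions.sh code):

    declare -A ctrls=([nvme0]=nvme0 [nvme1]=nvme1)
    declare -A nvmes=([nvme0]=nvme0_ns [nvme1]=nvme1_ns)
    declare -A bdfs=([nvme0]=0000:00:11.0 [nvme1]=0000:00:10.0)
    declare -A nvme0_ns=([1]=nvme0n1) nvme1_ns=([1]=nvme1n1)

    for ctrl in "${!ctrls[@]}"; do
        declare -n _ns_map=${nvmes[$ctrl]}   # follow the indirection, as @53 does
        echo "$ctrl @ ${bdfs[$ctrl]}: namespaces ${_ns_map[*]}"
        unset -n _ns_map                     # drop the nameref before reuse
    done

The two controllers are then distinguishable by subnqn (nqn.2019-08.org.qemu:12341 on 0000:00:11.0 above, :12340 on 0000:00:10.0 here), which is how later test stages pick a target.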
00:09:32.345 02:20:19 nvme_scc -- nvme_get nvme1 id-ctrl (continued): ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:09:32.346 02:20:19 nvme_scc -- scanning namespaces of nvme1: /sys/class/nvme/nvme1/nvme1n1 exists, ns_dev=nvme1n1
00:09:32.346 02:20:19 nvme_scc -- nvme_get nvme1n1 id-ns /dev/nvme1n1: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:32.347 02:20:19 nvme_scc -- nvme1n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:09:32.348 02:20:19 nvme_scc -- registering controller: _ctrl_ns[1]=nvme1n1 ctrls[nvme1]=nvme1 nvmes[nvme1]=nvme1_ns bdfs[nvme1]=0000:00:10.0 ordered_ctrls[1]=nvme1
00:09:32.348 02:20:19 nvme_scc -- next controller: /sys/class/nvme/nvme2 exists, pci=0000:00:12.0, pci_can_use 0000:00:12.0 (no allow/block list set) returns 0
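The register dumps condensed above (and the nvme2 dump that follows) all come from the nvme_get helper in nvme/functions.sh: as the trace shows, it declares a global associative array with local -gA, then splits every "reg : val" line of nvme-cli output with IFS=: and read -r, and evals each pair into the array. A minimal stand-alone sketch of that pattern, assuming nvme-cli's plain-text output format (a simplification, not the script verbatim):

    nvme_get() {
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                    # global assoc array, e.g. nvme1n1=()
        while IFS=: read -r reg val; do
            [[ -n $reg && -n $val ]] || continue
            reg=${reg//[[:space:]]/}           # 'nsze   ' -> 'nsze'
            val=${val#"${val%%[! ]*}"}         # trim leading padding
            eval "${ref}[\$reg]=\$val"         # e.g. nvme1n1[nsze]=0x17a17a
        done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
    }
    # usage: nvme_get nvme1n1 id-ns /dev/nvme1n1; echo "${nvme1n1[nsze]}"

Because read assigns everything after the first ':' to val, multi-colon values such as ps0's 'mp:25.00W ... rrt:0 rrl:0' survive as whole strings, which is why those registers appear unsplit in the dumps.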
00:09:32.348 02:20:19 nvme_scc -- ctrl_dev=nvme2
00:09:32.348 02:20:19 nvme_scc -- nvme_get nvme2 id-ctrl /dev/nvme2: vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:09:32.352 02:20:19 nvme_scc -- scanning namespaces of nvme2: /sys/class/nvme/nvme2/nvme2n1 exists, ns_dev=nvme2n1
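Before the nvme2n1 register dump below: the outer loop driving all of this walks /sys/class/nvme/nvme*, gates each controller through pci_can_use (both PCI filter lists are empty in this run, hence the bare [[ =~ ]] test and the immediate return 0), and records the device in the ctrls/nvmes/bdfs/ordered_ctrls maps, exactly as happened for nvme1 above. A hedged sketch of that bookkeeping (simplified; the PCI_BLOCKED environment filter is an assumption modeled on SPDK's scripts/common.sh, not shown verbatim in this trace):

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        dev=${ctrl##*/}                                   # e.g. nvme2
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:12.0
        # assumed filter: skip controllers on the block list, if one is set
        [[ -n ${PCI_BLOCKED:-} && $PCI_BLOCKED == *"$pci"* ]] && continue
        ctrls[$dev]=$dev
        nvmes[$dev]=${dev}_ns                             # name of its namespace map
        bdfs[$dev]=$pci
        ordered_ctrls[${dev/nvme/}]=$dev                  # indexed by controller number
    done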
00:09:32.352 02:20:19 nvme_scc -- nvme_get nvme2n1 id-ns /dev/nvme2n1: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0' …
ms:8 lbads:9 rp:0 ]] 00:09:32.354 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.354 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.354 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.354 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.354 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.354 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.354 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.354 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.354 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.354 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.354 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.354 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.354 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.354 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.354 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.354 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.354 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.354 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.354 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.354 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.354 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.354 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.354 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.354 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.354 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.354 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.354 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.354 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:32.355 02:20:19 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:32.355 02:20:19 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:32.355 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
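The block above is one full pass of nvme_get over "nvme id-ns /dev/nvme2n2": every "field : value" line printed by nvme-cli becomes one entry of the global associative array nvme2n2, via the IFS=: / read -r reg val / eval trio that repeats throughout this trace. Below is a minimal sketch of that loop, reconstructed from the functions.sh line numbers visible here; the NVME_CMD fallback and the exact whitespace handling are assumptions (the trace shows the binary invoked by full path, and values such as the lbaf strings come out with runs of spaces squeezed, which the sketch does not reproduce).

    # Sketch of nvme_get (nvme/functions.sh@16-23 in this trace), assuming
    # NVME_CMD points at the nvme-cli binary; the real helper may differ.
    nvme_get() {
        local ref=$1 reg val            # @17: ref=nvme2n2 for the pass above
        shift                           # @18
        local -gA "$ref=()"             # @20: declare the array at global scope
        while IFS=: read -r reg val; do # @21: split each line at the first ':'
            reg=${reg//[[:space:]]/}    # "lbaf  4 " -> "lbaf4"
            [[ -n $val ]] || continue   # @22: skip header/blank lines
            eval "${ref}[${reg}]=\"${val# }\""   # @23: store the value
        done < <("${NVME_CMD:-nvme}" "$@")       # @16: e.g. id-ns /dev/nvme2n2
    }

    nvme_get nvme2n2 id-ns /dev/nvme2n2   # afterwards: ${nvme2n2[nsze]} -> 0x100000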
00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:32.356 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
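Of the eight LBA formats parsed next, flbas=0x4 marks lbaf4 as the active one, which is why only that entry carries "(in use)": lbads:12 means 2^12 = 4096-byte data blocks, and ms:0 means no per-block metadata. The following hypothetical helper (not part of nvme/functions.sh) shows how the block size falls out of the arrays nvme_get fills, assuming bash 4.3+ for namerefs:

    # Hypothetical: derive the active LBA data size from a namespace array.
    lba_data_size() {
        local -n ns=$1                      # e.g. ns -> nvme2n2
        local idx=$(( ns[flbas] & 0xf ))    # low nibble selects the active format
        local rest=${ns[lbaf$idx]#*lbads:}  # "ms:0 lbads:12 rp:0 ..." -> "12 ..."
        echo $(( 1 << ${rest%% *} ))        # lbads is log2(bytes): 12 -> 4096
    }

    lba_data_size nvme2n2   # prints 4096 for the namespaces in this trace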
00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.357 
02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.357 
02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.357 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:32.358 02:20:19 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:32.358 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.359 
02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:32.359 02:20:19 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:32.359 02:20:19 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:32.359 02:20:19 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:32.359 02:20:19 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.359 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:32.360 02:20:19 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
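At this point the trace has finished nvme2 (ctrls, nvmes, bdfs and ordered_ctrls were filled in at functions.sh@60-63 above) and the @47 loop has moved on to /sys/class/nvme/nvme3: pci_can_use passes trivially because both PCI filter lists are empty (hence the bare "[[ =~ 0000:00:13.0 ]]" from scripts/common.sh), and the id-ctrl pass for nvme3 begins. A skeleton of that outer loop as the line numbers suggest; this excerpt never shows how pci is derived, so the readlink below is only an assumption, and the per-controller _ctrl_ns bookkeeping is simplified.

    # Skeleton of the controller scan (functions.sh@47-63 per the trace).
    declare -A ctrls nvmes bdfs _ctrl_ns
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue                          # @48
        pci=$(basename "$(readlink -f "$ctrl/device")")     # @49 (assumed)
        pci_can_use "$pci" || continue                      # @50
        ctrl_dev=${ctrl##*/}                                # @51: e.g. nvme3
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"       # @52
        for ns in "$ctrl/${ctrl##*/}n"*; do                 # @54
            [[ -e $ns ]] || continue                        # @55
            nvme_get "${ns##*/}" id-ns "/dev/${ns##*/}"     # @56-57
            _ctrl_ns[${ns##*n}]=${ns##*/}                   # @58 (simplified)
        done
        ctrls[$ctrl_dev]=$ctrl_dev                          # @60
        nvmes[$ctrl_dev]=${ctrl_dev}_ns                     # @61: name of ns map
        bdfs[$ctrl_dev]=$pci                                # @62
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev          # @63
    done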
00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:32.360 02:20:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:32.360-00:09:32.632 02:20:19 nvme_scc -- nvme/functions.sh@21-23 [xtrace condensed: nvme_get reads the remaining id-ctrl fields for nvme3 one reg/val pair at a time (`IFS=: read -r reg val`, then an eval into the nvme3 array). Non-zero values: oacs=0x12a, acl=3, aerl=3, frmw=0x3, lpa=0x7, wctemp=343, cctemp=373, endgidmax=1, sqes=0x66, cqes=0x44, nn=256, oncs=0x15d, vwc=0x7, ocfs=0x3, sgls=0x1, subnqn=nqn.2019-08.org.qemu:fdp-subsys3, ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0', rwt='0 rwl:0 idle_power:- active_power:-', active_power_workload='-'; every other field (nvmsr, vwci, mec, elpe, npss, avscc, apsta, mtfa, hmpre, hmmin, tnvmcap, unvmcap, rpmbs, edstt, dsto, fwug, kas, hctma, mntmt, mxtmt, sanicap, hmminds, hmmaxd, nsetidmax, anatt, anacap, anagrpmax, nanagrpid, pels, domainid, megcap, maxcmd, fuses, fna, awun, awupf, icsvscc, nwpc, acwu, mnan, maxdna, maxcna, ioccsz, iorcsz, icdoff, fcatt, msdbd, ofcs) is 0.]
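What the condensed trace above is doing, pair after pair: nvme_get feeds `nvme id-ctrl` output through `IFS=: read -r reg val` and evals each field into a per-controller associative array. A minimal standalone sketch of that loop (simplified, not functions.sh verbatim; the trim helper is an illustrative addition, the real script evals the raw value instead):

    #!/usr/bin/env bash
    # Sketch of the nvme_get parse loop traced above.
    trim() { local s=$*; s=${s#"${s%%[![:space:]]*}"}; printf '%s' "${s%"${s##*[![:space:]]}"}"; }

    declare -A nvme3
    while IFS=: read -r reg val; do
        reg=$(trim "$reg")
        val=$(trim "$val")
        [[ -n $val ]] || continue       # same guard as functions.sh@22
        nvme3[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme3)

    echo "oncs=${nvme3[oncs]}"          # prints 0x15d on these QEMU controllers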
00:09:32.632 02:20:19 nvme_scc -- nvme/functions.sh@53-63 [xtrace condensed: nvme3 registration completes: ctrls[nvme3]=nvme3, nvmes[nvme3]=nvme3_ns, bdfs[nvme3]=0000:00:13.0, ordered_ctrls[3]=nvme3; 4 controllers found in total.]
00:09:32.632 02:20:19 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:09:32.632 02:20:19 nvme_scc -- nvme/functions.sh@192-199 [xtrace condensed: get_ctrls_with_feature iterates ctrls (nvme1, nvme0, nvme3, nvme2); for each, ctrl_has_scc reads oncs=0x15d through a nameref and tests (( oncs & 1 << 8 )); the Simple Copy bit is set on all four controllers, so each is echoed.]
00:09:32.632 02:20:19 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:09:32.632 02:20:19 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:09:32.632 02:20:19 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:09:32.632 02:20:19 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:09:32.633 02:20:19 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:09:32.633 02:20:19 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:09:32.894 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:33.467 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:09:33.467 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:09:33.467 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:09:33.467 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
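The `(( oncs & 1 << 8 ))` test at functions.sh@188 above is the entire feature probe: bit 8 of ONCS (Optional NVM Command Support) advertises the Simple Copy Command. A standalone version of the same bit test (helper name is illustrative, the value is taken from the trace):

    # Exit status 0 only if the Simple Copy bit of ONCS is set.
    ctrl_has_scc() {
        local oncs=$1
        (( oncs & 1 << 8 ))
    }

    ctrl_has_scc 0x15d && echo "SCC supported"   # 0x15d has bit 8 (0x100) set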
00:09:33.728 02:20:20 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:09:33.728 02:20:20 nvme_scc -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:09:33.728 02:20:20 nvme_scc -- common/autotest_common.sh@1109 -- # xtrace_disable
00:09:33.728 02:20:20 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:09:33.728 ************************************
00:09:33.728 START TEST nvme_simple_copy
00:09:33.728 ************************************
00:09:33.728 02:20:20 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:09:33.989 Initializing NVMe Controllers
00:09:33.989 Attaching to 0000:00:10.0
00:09:33.989 Controller supports SCC. Attached to 0000:00:10.0
00:09:33.989 Namespace ID: 1 size: 6GB
00:09:33.989 Initialization complete.
00:09:33.989
00:09:33.989 Controller QEMU NVMe Ctrl (12340 )
00:09:33.989 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:09:33.989 Namespace Block Size:4096
00:09:33.989 Writing LBAs 0 to 63 with Random Data
00:09:33.989 Copied LBAs from 0 - 63 to the Destination LBA 256
00:09:33.989 LBAs matching Written Data: 64
00:09:33.989
00:09:33.989 real 0m0.284s
00:09:33.989 user 0m0.107s
00:09:33.989 sys 0m0.074s
00:09:33.989 02:20:20 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1128 -- # xtrace_disable
00:09:33.989 ************************************
00:09:33.989 END TEST nvme_simple_copy
00:09:33.989 ************************************
00:09:33.989 02:20:20 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:09:33.989
00:09:33.989 real 0m7.795s
00:09:33.989 user 0m1.087s
00:09:33.989 sys 0m1.454s
00:09:33.989 02:20:20 nvme_scc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:09:33.989 02:20:20 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:09:33.989 ************************************
00:09:33.989 END TEST nvme_scc
00:09:33.989 ************************************
00:09:33.989 02:20:21 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:09:33.989 02:20:21 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:09:33.989 02:20:21 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:09:33.989 02:20:21 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:09:33.989 02:20:21 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:09:33.989 02:20:21 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:09:33.989 02:20:21 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:09:33.989 02:20:21 -- common/autotest_common.sh@10 -- # set +x
00:09:33.989 ************************************
00:09:33.989 START TEST nvme_fdp
00:09:33.989 ************************************
00:09:33.989 02:20:21 nvme_fdp -- common/autotest_common.sh@1127 -- # test/nvme/nvme_fdp.sh
00:09:33.989 * Looking for test storage...
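The banner/timing blocks above come from the run_test wrapper. A sketch of that pattern (illustrative only, not autotest_common.sh verbatim): print START/END markers around a timed command and propagate its exit status.

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                       # bash's time keyword produces the real/user/sys lines
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    run_test demo_sleep sleep 0.1       # hypothetical usage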
00:09:33.989 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:09:34.251 02:20:21 nvme_fdp -- common/autotest_common.sh@1690-1691, scripts/common.sh@333-368 [xtrace condensed: the installed lcov version is read with awk, then cmp_versions evaluates "lt 1.15 2" by splitting both version strings on IFS=.-: into arrays (read -ra ver1/ver2) and comparing them element by element; 1 < 2, so it returns 0 and lcov counts as pre-2.0.]
00:09:34.251 02:20:21 nvme_fdp -- common/autotest_common.sh@1692-1705 [xtrace condensed: accordingly, lcov_rc_opt, LCOV_OPTS, and LCOV are exported with the legacy coverage flags: --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1.]
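The comparison traced there splits versions on '.', '-', and ':' and compares numerically. A simplified standalone sketch of the same idea (helper name is illustrative; cmp_versions itself also pads the shorter array and supports other operators):

    version_lt() {
        local IFS='.-:'
        local -a a=($1) b=($2)          # unquoted on purpose: split on IFS
        local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                        # equal is not "less than"
    }

    version_lt 1.15 2 && echo "lcov < 2: use the legacy --rc options"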
00:09:34.251 02:20:21 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:09:34.251 02:20:21 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:09:34.251 02:20:21 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:09:34.251 02:20:21 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob
00:09:34.251 02:20:21 nvme_fdp -- scripts/common.sh@552-553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:34.251-00:09:34.252 02:20:21 nvme_fdp -- paths/export.sh@2-6 [xtrace condensed: export.sh prepends the golangci 1.54.2, protoc 21.7, and go 1.21.1 bin directories to PATH (the toolchain prefix ends up repeated several times) and exports and echoes the result.]
00:09:34.252 02:20:21 nvme_fdp -- nvme/functions.sh@10-14 [xtrace condensed: the global lookup tables are declared: associative arrays ctrls, nvmes, bdfs; indexed array ordered_ctrls; nvme_name= .]
00:09:34.252 02:20:21 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:09:34.252 02:20:21 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:09:34.512 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:34.774 Waiting for block devices as requested
00:09:34.774 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:09:34.774 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:09:34.774 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:09:35.034 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:09:40.337 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:09:40.337 02:20:27 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
00:09:40.337 02:20:27 nvme_fdp -- nvme/functions.sh@45-52 [xtrace condensed: scan_nvme_ctrls walks /sys/class/nvme/nvme*; for /sys/class/nvme/nvme0, pci_can_use accepts pci=0000:00:11.0, ctrl_dev=nvme0, and nvme_get nvme0 id-ctrl /dev/nvme0 begins reading reg/val pairs from /usr/local/src/nvme-cli/nvme id-ctrl into a fresh nvme0 array, starting with vid=0x1b36.]
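The `nvme -> uio_pci_generic` and `uio_pci_generic -> nvme` lines above are setup.sh moving the PCI functions between kernel drivers. A sketch of the generic sysfs mechanism behind such a rebind (an assumed illustration of the kind of operation involved; setup.sh's actual logic is more involved and handles many more cases; requires root):

    rebind() {
        local bdf=$1 driver=$2
        # detach from the current driver, if any
        if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
            echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"
        fi
        # steer the next probe to the requested driver, then reprobe
        echo "$driver" > "/sys/bus/pci/devices/$bdf/driver_override"
        echo "$bdf" > /sys/bus/pci/drivers_probe
    }

    rebind 0000:00:11.0 nvme   # hypothetical usage matching the reset above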
00:09:40.337-00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@21-23 [xtrace condensed: nvme_get fills the nvme0 array field by field from id-ctrl: ssvid=0x1af4, sn='12341 ', mn='QEMU NVMe Ctrl ', fr='8.0.0 ', rab=6, ieee=525400, mdts=7, cntlid=0, ver=0x10400, oaes=0x100, ctratt=0x8000, cntrltype=1, fguid=00000000-0000-0000-0000-000000000000, oacs=0x12a, acl=3, aerl=3, frmw=0x3, lpa=0x7, wctemp=343, cctemp=373, sqes=0x66, cqes=0x44, nn=256, oncs=0x15d, vwc=0x7, ocfs=0x3, sgls=0x1; the remaining fields (cmic, rtd3r, rtd3e, rrls, crdt1-3, nvmsr, vwci, mec, elpe, npss, avscc, apsta, mtfa, hmpre, hmmin, tnvmcap, unvmcap, rpmbs, edstt, dsto, fwug, kas, hctma, mntmt, mxtmt, sanicap, hmminds, hmmaxd, nsetidmax, endgidmax, anatt, anacap, anagrpmax, nanagrpid, pels, domainid, megcap, maxcmd, fuses, fna, awun, awupf, icsvscc, nwpc, acwu, mnan, maxdna) are 0.]
nvme/functions.sh@21 -- # IFS=: 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.339 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:40.340 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:40.340 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.340 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.340 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:40.340 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:40.340 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:40.340 
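The sqes=0x66 and cqes=0x44 values captured above are packed power-of-two fields: in the NVMe Identify Controller data, bits 3:0 carry the required queue-entry size and bits 7:4 the maximum, each as log2 of the byte count. A minimal decode sketch (decode_es is an illustrative helper, not part of nvme/functions.sh):

    decode_es() {
      # bits 3:0 = required entry size, bits 7:4 = maximum, both log2(bytes)
      local reg=$(($1))
      echo "min=$((1 << (reg & 0xf))) max=$((1 << ((reg >> 4) & 0xf))) bytes"
    }
    decode_es 0x66   # min=64 max=64 bytes -> 64-byte submission queue entries
    decode_es 0x44   # min=16 max=16 bytes -> 16-byte completion queue entries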
00:09:40.340 02:20:27 nvme_fdp -- #   rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload='-'
00:09:40.340 02:20:27 nvme_fdp -- nvme/functions.sh@53-57 -- # namespace scan: local -n _ctrl_ns=nvme0_ns; /sys/class/nvme/nvme0/nvme0n1 exists, ns_dev=nvme0n1
00:09:40.340 02:20:27 nvme_fdp -- nvme/functions.sh@16-20 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1: local -gA nvme0n1=(), parsing /usr/local/src/nvme-cli/nvme id-ns output:
00:09:40.340 02:20:27 nvme_fdp -- #   nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14
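Every iteration traced here follows the same shape (functions.sh@16-23): run nvme-cli, split each output line on the first colon into reg/val, skip empty values, and eval the pair into a global associative array named after the device. A minimal sketch of that pattern, assuming nvme-cli's plain "field : value" output format; nvme_get_sketch is an illustrative name, not the SPDK helper itself:

    nvme_get_sketch() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                 # e.g. declare -gA nvme0n1=()
      while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}          # 'nsze   ' -> 'nsze'
        val=${val# }                      # drop the space after the colon
        [[ -n $val ]] || continue         # keep only populated fields
        eval "${ref}[${reg}]=\"${val}\""  # nvme0n1[nsze]="0x140000"
      done < <("$@")                      # run the remaining args as a command
    }
    # usage: nvme_get_sketch nvme0n1 /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1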
00:09:40.340 02:20:27 nvme_fdp -- #   nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:09:40.340 02:20:27 nvme_fdp -- #   nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:09:40.340 02:20:27 nvme_fdp -- #   npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:09:40.341 02:20:27 nvme_fdp -- #   nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:09:40.341 02:20:27 nvme_fdp -- #   nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:40.341 02:20:27 nvme_fdp -- #   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:09:40.341 02:20:27 nvme_fdp -- #   lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:40.341 02:20:27 nvme_fdp -- nvme/functions.sh@58-63 -- # _ctrl_ns[1]=nvme0n1; ctrls[nvme0]=nvme0; nvmes[nvme0]=nvme0_ns; bdfs[nvme0]=0000:00:11.0; ordered_ctrls[0]=nvme0
00:09:40.341 02:20:27 nvme_fdp -- nvme/functions.sh@47-52 -- # next controller: /sys/class/nvme/nvme1 exists, pci=0000:00:10.0, pci_can_use 0000:00:10.0 -> allowed (scripts/common.sh returns 0), ctrl_dev=nvme1
00:09:40.341 02:20:27 nvme_fdp -- nvme/functions.sh@16-20 -- # nvme_get nvme1 id-ctrl /dev/nvme1: local -gA nvme1=(), parsing /usr/local/src/nvme-cli/nvme id-ctrl output:
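The bookkeeping entries above close out one pass of the enumeration loop (functions.sh@47-63): for each /sys/class/nvme/nvme*, resolve the controller's PCI address, check it against the allow/block lists, parse id-ctrl plus each namespace's id-ns, and record the results in the ctrls/nvmes/bdfs/ordered_ctrls tables. A condensed sketch of that loop, reusing nvme_get_sketch from above; the readlink-based PCI lookup is an assumption here, the real script resolves it via scripts/common.sh:

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue
      ctrl_dev=${ctrl##*/}                              # nvme0, nvme1, ...
      pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:10.0
      nvme_get_sketch "$ctrl_dev" nvme id-ctrl "/dev/$ctrl_dev"
      for ns in "$ctrl/${ctrl##*/}n"*; do               # nvme0n1, nvme0n2, ...
        [[ -e $ns ]] || continue
        nvme_get_sketch "${ns##*/}" nvme id-ns "/dev/${ns##*/}"
      done
      ctrls[$ctrl_dev]=$ctrl_dev
      nvmes[$ctrl_dev]=${ctrl_dev}_ns                   # name of the ns map
      bdfs[$ctrl_dev]=$pci
      ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done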
00:09:40.341 02:20:27 nvme_fdp -- #   vid=0x1b36 ssvid=0x1af4 sn='12340 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7
00:09:40.342 02:20:27 nvme_fdp -- #   cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1
00:09:40.342 02:20:27 nvme_fdp -- #   fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
00:09:40.342 02:20:27 nvme_fdp -- #   oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
00:09:40.343 02:20:27 nvme_fdp -- #   mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0
00:09:40.343 02:20:27 nvme_fdp -- #   sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:09:40.344 02:20:27 nvme_fdp -- #   sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0
00:09:40.344 02:20:27 nvme_fdp -- #   ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12340 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0
00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:40.344 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:40.345 02:20:27 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.345 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:40.346 02:20:27 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:40.346 02:20:27 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:40.346 02:20:27 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:40.346 02:20:27 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:40.346 
02:20:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:40.346 02:20:27 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.346 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.347 02:20:27 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.347 02:20:27 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:40.347 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
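[editor's note] The trace has been repeating the same four-step pattern for every identify field: test that the value just read is non-empty (functions.sh@22), eval an assignment into the controller's associative array (functions.sh@23), reset IFS to ':' and read the next reg/val pair (functions.sh@21). A minimal re-creation of that pattern is sketched below. It follows the nvme_get / local -gA / eval steps visible in the trace, but the trimming details and the plain `nvme` invocation (the trace actually calls /usr/local/src/nvme-cli/nvme) are illustrative assumptions, not SPDK's verbatim functions.sh. The dump of nvme2's remaining id-ctrl fields continues after the sketch.

    #!/usr/bin/env bash
    # Hedged sketch of the parsing loop whose xtrace output surrounds this note.
    nvme_get() {                     # nvme_get <array-name> <subcmd> <device>
        local ref=$1 reg val
        shift
        local -gA "$ref=()"          # global associative array, e.g. nvme2=()
        # nvme-cli prints one "field : value" line per identify field;
        # split on the first ':' only, so values may keep embedded colons
        # (e.g. 'mp:25.00W operational enlat:16 ...').
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}                  # keys collapse to one token (ps 0 -> ps0)
            val="${val#"${val%%[![:space:]]*}"}"      # trim leading blanks, keep trailing ones
            # Header and blank lines have an empty value and are skipped,
            # matching the `[[ -n ... ]] && eval '...'` pairs in the trace.
            [[ -n $val ]] && eval "${ref}[${reg}]=\"${val}\""
        done < <(nvme "$@")          # e.g. `nvme id-ctrl /dev/nvme2`
    }

    nvme_get nvme2 id-ctrl /dev/nvme2
    echo "mdts=${nvme2[mdts]} sn=${nvme2[sn]}"   # with the values traced here: mdts=7, sn='12342 '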
00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.348 02:20:27 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:40.348 02:20:27 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:40.348 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.349 02:20:27 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:40.349 02:20:27 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
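The trace above is nvme/functions.sh building the nvme2n1 associative array: nvme_get pipes '/usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1' through a 'while IFS=: read -r reg val' loop and evals each "key : value" pair into a globally scoped array (the traced local -gA). A minimal sketch of that pattern, assuming only that an nvme-cli binary is on PATH; it is simplified and does not reproduce the real script's key normalization (e.g. folding the padded "ps    0" key into ps0):

    # Sketch of the traced parse loop: "key : value" lines -> global assoc array.
    nvme_get() {
        local ref=$1 cmd=$2 dev=$3 reg val
        declare -gA "$ref"            # make the array global, like the traced local -gA
        eval "$ref=()"                # reset it, mirroring local -gA 'nvme2n1=()'
        while IFS=: read -r reg val; do
            reg=${reg%% *}            # nvme-cli pads keys: "nsze    : 0x100000"
            val=${val# }
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[\$reg]=\$val"    # e.g. nvme2n1[nsze]=0x100000
        done < <(nvme "$cmd" "$dev")
    }
    # usage (device name illustrative):
    #   nvme_get ns_info id-ns /dev/nvme2n1; echo "${ns_info[nsze]}"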
00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:40.349 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
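The nvme2n1 values being captured here are easy to sanity-check by hand: nsze, ncap and nuse are block counts (0x100000 = 1,048,576), and the LBA format recorded a little further down in this trace (lbaf4 "ms:0 lbads:12 rp:0 (in use)", matching flbas=0x4) means 2^12 = 4096-byte blocks, so the namespace works out to 4 GiB. A quick arithmetic check, with the two values hard-coded from this log:

    # Namespace size from the id-ns fields captured above (values from this log).
    nsze=0x100000   # blocks
    lbads=12        # log2(block size) from the in-use LBA format
    echo $(( nsze * (1 << lbads) ))                 # 4294967296 bytes
    echo "$(( (nsze * (1 << lbads)) >> 30 )) GiB"   # 4 GiB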
00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.350 02:20:27 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.350 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.351 02:20:27 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.351 02:20:27 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:40.351 02:20:27 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.351 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.352 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
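This is the third pass through the same per-namespace loop: for every /sys/class/nvme/nvme2/nvme2n* entry, functions.sh@54-58 calls nvme_get and then files the result into the controller's namespace map through the nameref taken at functions.sh@53 (local -n _ctrl_ns=nvme2_ns). A condensed sketch of that walk, reusing the nvme_get sketch above; the real code runs inside a function, so local -n is shown here as declare -n:

    # Sketch of the traced namespace enumeration (paths/array names as in the log).
    declare -A nvme2_ns=()
    declare -n _ctrl_ns=nvme2_ns              # functions.sh@53: nameref into the map
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/${ctrl##*/}n"*; do       # nvme2n1, nvme2n2, nvme2n3, ...
        [[ -e $ns ]] || continue              # functions.sh@55
        ns_dev=${ns##*/}                      # functions.sh@56
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # functions.sh@57
        _ctrl_ns[${ns##*n}]=$ns_dev           # functions.sh@58: keyed by ns number
    done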
00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:40.353 
02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:40.353 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.354 02:20:27 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:40.354 02:20:27 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:40.354 02:20:27 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:40.354 02:20:27 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:40.354 02:20:27 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:40.354 02:20:27 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.354 02:20:27 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.354 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.355 
02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.355 02:20:27 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:40.355 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.356 
02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.356 02:20:27 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:40.356 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
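The trace above and below repeats one pattern for every controller field: nvme_get pipes "nvme id-ctrl /dev/nvme3" through "IFS=: read -r reg val" and evals each pair into the nvme3 associative array. A condensed standalone sketch of that pattern (not the suite's functions.sh; the trimming is simplified):

    declare -A ctrl_regs=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # strip the padding around the register name
        [[ -n $reg && -n $val ]] || continue
        ctrl_regs[$reg]=${val# }        # drop the single space after ':'
    done < <(nvme id-ctrl /dev/nvme3)
    echo "ctratt=${ctrl_regs[ctratt]}"  # 0x88010 on this controller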
00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.357 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.358 02:20:27 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:40.358 02:20:27 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
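This walk over the controllers hinges on a single arithmetic test, "(( ctratt & 1 << 19 ))": bit 19 of the Identify Controller CTRATT field advertises Flexible Data Placement. nvme3 reports 0x88010 (bit set), while nvme0, nvme1, and nvme2 report 0x8000 (bit clear) and are skipped. The same check as a standalone sketch against plain nvme-cli output:

    for dev in /dev/nvme[0-9]; do
        ctratt=$(nvme id-ctrl "$dev" | awk -F: '/^ctratt/ {gsub(/ /,"",$2); print $2}')
        [[ -n $ctratt ]] || continue
        if (( ctratt & 1 << 19 )); then
            echo "$dev supports Flexible Data Placement"
        fi
    done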
00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:40.358 02:20:27 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:40.358 02:20:27 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:40.358 02:20:27 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:40.358 02:20:27 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:40.931 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:41.504 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:41.504 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:41.504 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:41.504 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:41.504 02:20:28 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:41.504 02:20:28 nvme_fdp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:41.504 02:20:28 
nvme_fdp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:41.504 02:20:28 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:41.504 ************************************ 00:09:41.504 START TEST nvme_flexible_data_placement 00:09:41.504 ************************************ 00:09:41.504 02:20:28 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:41.769 Initializing NVMe Controllers 00:09:41.769 Attaching to 0000:00:13.0 00:09:41.769 Controller supports FDP Attached to 0000:00:13.0 00:09:41.769 Namespace ID: 1 Endurance Group ID: 1 00:09:41.769 Initialization complete. 00:09:41.769 00:09:41.769 ================================== 00:09:41.769 == FDP tests for Namespace: #01 == 00:09:41.769 ================================== 00:09:41.769 00:09:41.769 Get Feature: FDP: 00:09:41.769 ================= 00:09:41.769 Enabled: Yes 00:09:41.769 FDP configuration Index: 0 00:09:41.769 00:09:41.769 FDP configurations log page 00:09:41.769 =========================== 00:09:41.769 Number of FDP configurations: 1 00:09:41.769 Version: 0 00:09:41.769 Size: 112 00:09:41.769 FDP Configuration Descriptor: 0 00:09:41.769 Descriptor Size: 96 00:09:41.769 Reclaim Group Identifier format: 2 00:09:41.769 FDP Volatile Write Cache: Not Present 00:09:41.769 FDP Configuration: Valid 00:09:41.769 Vendor Specific Size: 0 00:09:41.769 Number of Reclaim Groups: 2 00:09:41.769 Number of Reclaim Unit Handles: 8 00:09:41.769 Max Placement Identifiers: 128 00:09:41.769 Number of Namespaces Supported: 256 00:09:41.769 Reclaim unit Nominal Size: 6000000 bytes 00:09:41.769 Estimated Reclaim Unit Time Limit: Not Reported 00:09:41.769 RUH Desc #000: RUH Type: Initially Isolated 00:09:41.769 RUH Desc #001: RUH Type: Initially Isolated 00:09:41.769 RUH Desc #002: RUH Type: Initially Isolated 00:09:41.769 RUH Desc #003: RUH Type: Initially Isolated 00:09:41.769 RUH Desc #004: RUH Type: Initially Isolated 00:09:41.769 RUH Desc #005: RUH Type: Initially Isolated 00:09:41.769 RUH Desc #006: RUH Type: Initially Isolated 00:09:41.769 RUH Desc #007: RUH Type: Initially Isolated 00:09:41.769 00:09:41.769 FDP reclaim unit handle usage log page 00:09:41.769 ====================================== 00:09:41.769 Number of Reclaim Unit Handles: 8 00:09:41.769 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:41.769 RUH Usage Desc #001: RUH Attributes: Unused 00:09:41.769 RUH Usage Desc #002: RUH Attributes: Unused 00:09:41.769 RUH Usage Desc #003: RUH Attributes: Unused 00:09:41.769 RUH Usage Desc #004: RUH Attributes: Unused 00:09:41.769 RUH Usage Desc #005: RUH Attributes: Unused 00:09:41.769 RUH Usage Desc #006: RUH Attributes: Unused 00:09:41.769 RUH Usage Desc #007: RUH Attributes: Unused 00:09:41.769 00:09:41.769 FDP statistics log page 00:09:41.769 ======================= 00:09:41.769 Host bytes with metadata written: 1110872064 00:09:41.769 Media bytes with metadata written: 1114292224 00:09:41.769 Media bytes erased: 0 00:09:41.769 00:09:41.769 FDP Reclaim unit handle status 00:09:41.769 ============================== 00:09:41.769 Number of RUHS descriptors: 2 00:09:41.769 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000005c97 00:09:41.769 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:41.769 00:09:41.769 FDP write on placement id: 0 success 00:09:41.769 00:09:41.769 Set Feature: Enabling FDP events on Placement handle:
#0 Success 00:09:41.769 00:09:41.769 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:41.769 00:09:41.769 Get Feature: FDP Events for Placement handle: #0 00:09:41.769 ======================== 00:09:41.769 Number of FDP Events: 6 00:09:41.769 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:41.769 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:41.769 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:41.769 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:41.769 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:41.769 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:41.769 00:09:41.769 FDP events log page 00:09:41.769 =================== 00:09:41.769 Number of FDP events: 1 00:09:41.769 FDP Event #0: 00:09:41.769 Event Type: RU Not Written to Capacity 00:09:41.769 Placement Identifier: Valid 00:09:41.769 NSID: Valid 00:09:41.769 Location: Valid 00:09:41.769 Placement Identifier: 0 00:09:41.769 Event Timestamp: 8 00:09:41.769 Namespace Identifier: 1 00:09:41.769 Reclaim Group Identifier: 0 00:09:41.769 Reclaim Unit Handle Identifier: 0 00:09:41.769 00:09:41.769 FDP test passed 00:09:41.769 00:09:41.769 real 0m0.254s 00:09:41.769 user 0m0.080s 00:09:41.769 sys 0m0.072s 00:09:41.769 ************************************ 00:09:41.769 END TEST nvme_flexible_data_placement 00:09:41.770 ************************************ 00:09:41.770 02:20:28 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:41.770 02:20:28 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:41.770 00:09:41.770 real 0m7.800s 00:09:41.770 user 0m1.030s 00:09:41.770 sys 0m1.514s 00:09:41.770 02:20:28 nvme_fdp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:41.770 ************************************ 00:09:41.770 END TEST nvme_fdp 00:09:41.770 ************************************ 00:09:41.770 02:20:28 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:41.770 02:20:28 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:41.770 02:20:28 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:41.770 02:20:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:41.770 02:20:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:41.770 02:20:28 -- common/autotest_common.sh@10 -- # set +x 00:09:41.770 ************************************ 00:09:41.770 START TEST nvme_rpc 00:09:41.770 ************************************ 00:09:41.770 02:20:28 nvme_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:42.030 * Looking for test storage... 
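Aside on the FDP output just printed: the pages the fdp binary decodes are the standard NVMe 2.0 FDP log pages, so they can also be pulled raw with stock nvme-cli. A hedged example (LIDs 0x20 through 0x23 are the FDP configurations, reclaim unit handle usage, statistics, and events pages; the LSI field carries the Endurance Group ID, 1 here):

    nvme get-log /dev/nvme3 --log-id=0x20 --log-len=512 --lsi=1   # FDP configurations
    nvme get-log /dev/nvme3 --log-id=0x21 --log-len=512 --lsi=1   # reclaim unit handle usage
    nvme get-log /dev/nvme3 --log-id=0x22 --log-len=64 --lsi=1    # FDP statistics

Recent nvme-cli releases also ship decoded variants under "nvme fdp ..."; the raw get-log dump is the lowest-common-denominator route.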
00:09:42.030 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:42.030 02:20:28 nvme_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:42.030 02:20:28 nvme_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:42.030 02:20:28 nvme_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:09:42.030 02:20:29 nvme_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:42.030 02:20:29 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.030 02:20:29 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.030 02:20:29 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.030 02:20:29 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.030 02:20:29 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.030 02:20:29 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.030 02:20:29 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.030 02:20:29 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.030 02:20:29 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.030 02:20:29 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.030 02:20:29 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.030 02:20:29 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:42.030 02:20:29 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:42.030 02:20:29 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.030 02:20:29 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:42.030 02:20:29 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:42.030 02:20:29 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:42.030 02:20:29 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.030 02:20:29 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:42.031 02:20:29 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.031 02:20:29 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:42.031 02:20:29 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:42.031 02:20:29 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.031 02:20:29 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:42.031 02:20:29 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.031 02:20:29 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.031 02:20:29 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.031 02:20:29 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:42.031 02:20:29 nvme_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.031 02:20:29 nvme_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:42.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.031 --rc genhtml_branch_coverage=1 00:09:42.031 --rc genhtml_function_coverage=1 00:09:42.031 --rc genhtml_legend=1 00:09:42.031 --rc geninfo_all_blocks=1 00:09:42.031 --rc geninfo_unexecuted_blocks=1 00:09:42.031 00:09:42.031 ' 00:09:42.031 02:20:29 nvme_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:42.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.031 --rc genhtml_branch_coverage=1 00:09:42.031 --rc genhtml_function_coverage=1 00:09:42.031 --rc genhtml_legend=1 00:09:42.031 --rc geninfo_all_blocks=1 00:09:42.031 --rc geninfo_unexecuted_blocks=1 00:09:42.031 00:09:42.031 ' 00:09:42.031 02:20:29 nvme_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:09:42.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.031 --rc genhtml_branch_coverage=1 00:09:42.031 --rc genhtml_function_coverage=1 00:09:42.031 --rc genhtml_legend=1 00:09:42.031 --rc geninfo_all_blocks=1 00:09:42.031 --rc geninfo_unexecuted_blocks=1 00:09:42.031 00:09:42.031 ' 00:09:42.031 02:20:29 nvme_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:42.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.031 --rc genhtml_branch_coverage=1 00:09:42.031 --rc genhtml_function_coverage=1 00:09:42.031 --rc genhtml_legend=1 00:09:42.031 --rc geninfo_all_blocks=1 00:09:42.031 --rc geninfo_unexecuted_blocks=1 00:09:42.031 00:09:42.031 ' 00:09:42.031 02:20:29 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:42.031 02:20:29 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:42.031 02:20:29 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:42.031 02:20:29 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:09:42.031 02:20:29 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:42.031 02:20:29 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:42.031 02:20:29 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:42.031 02:20:29 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:09:42.031 02:20:29 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:42.031 02:20:29 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:42.031 02:20:29 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:42.031 02:20:29 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:42.031 02:20:29 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:42.031 02:20:29 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:42.031 02:20:29 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:42.031 02:20:29 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:42.031 02:20:29 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65637 00:09:42.031 02:20:29 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:42.031 02:20:29 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65637 00:09:42.031 02:20:29 nvme_rpc -- common/autotest_common.sh@833 -- # '[' -z 65637 ']' 00:09:42.031 02:20:29 nvme_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.031 02:20:29 nvme_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:42.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.031 02:20:29 nvme_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.031 02:20:29 nvme_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:42.031 02:20:29 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.292 [2024-11-04 02:20:29.180013] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
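Once the target is up, nvme_rpc.sh drives it entirely over JSON-RPC. Condensed, the flow traced below is (every call appears verbatim in this log; the firmware file deliberately does not exist):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0    # exposes Nvme0n1
    $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1 \
        || echo "failed as expected (-32603, open file failed)"
    $rpc bdev_nvme_detach_controller Nvme0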
00:09:42.292 [2024-11-04 02:20:29.180162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65637 ] 00:09:42.292 [2024-11-04 02:20:29.343365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:42.552 [2024-11-04 02:20:29.469653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.552 [2024-11-04 02:20:29.469741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.123 02:20:30 nvme_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:43.123 02:20:30 nvme_rpc -- common/autotest_common.sh@866 -- # return 0 00:09:43.123 02:20:30 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:43.384 Nvme0n1 00:09:43.384 02:20:30 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:43.384 02:20:30 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:43.644 request: 00:09:43.644 { 00:09:43.644 "bdev_name": "Nvme0n1", 00:09:43.644 "filename": "non_existing_file", 00:09:43.644 "method": "bdev_nvme_apply_firmware", 00:09:43.644 "req_id": 1 00:09:43.644 } 00:09:43.644 Got JSON-RPC error response 00:09:43.644 response: 00:09:43.644 { 00:09:43.644 "code": -32603, 00:09:43.644 "message": "open file failed." 00:09:43.644 } 00:09:43.644 02:20:30 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:43.644 02:20:30 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:43.644 02:20:30 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:43.905 02:20:30 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:43.905 02:20:30 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65637 00:09:43.905 02:20:30 nvme_rpc -- common/autotest_common.sh@952 -- # '[' -z 65637 ']' 00:09:43.905 02:20:30 nvme_rpc -- common/autotest_common.sh@956 -- # kill -0 65637 00:09:43.905 02:20:30 nvme_rpc -- common/autotest_common.sh@957 -- # uname 00:09:43.905 02:20:30 nvme_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:43.905 02:20:30 nvme_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65637 00:09:43.905 02:20:30 nvme_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:43.905 killing process with pid 65637 00:09:43.905 02:20:30 nvme_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:43.905 02:20:30 nvme_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65637' 00:09:43.905 02:20:30 nvme_rpc -- common/autotest_common.sh@971 -- # kill 65637 00:09:43.905 02:20:30 nvme_rpc -- common/autotest_common.sh@976 -- # wait 65637 00:09:45.825 00:09:45.825 real 0m3.622s 00:09:45.825 user 0m6.736s 00:09:45.825 sys 0m0.643s 00:09:45.825 02:20:32 nvme_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:45.825 ************************************ 00:09:45.825 END TEST nvme_rpc 00:09:45.825 02:20:32 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:45.825 ************************************ 00:09:45.825 02:20:32 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:45.825 02:20:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 
1 ']' 00:09:45.825 02:20:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:45.825 02:20:32 -- common/autotest_common.sh@10 -- # set +x 00:09:45.825 ************************************ 00:09:45.825 START TEST nvme_rpc_timeouts 00:09:45.825 ************************************ 00:09:45.825 02:20:32 nvme_rpc_timeouts -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:45.825 * Looking for test storage... 00:09:45.825 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:45.825 02:20:32 nvme_rpc_timeouts -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:45.825 02:20:32 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lcov --version 00:09:45.825 02:20:32 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:45.825 02:20:32 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:45.825 02:20:32 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.825 02:20:32 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.825 02:20:32 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.825 02:20:32 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.825 02:20:32 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.825 02:20:32 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.825 02:20:32 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.825 02:20:32 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.825 02:20:32 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.825 02:20:32 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.825 02:20:32 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.825 02:20:32 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:45.825 02:20:32 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:45.825 02:20:32 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.825 02:20:32 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:45.825 02:20:32 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:45.825 02:20:32 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:45.825 02:20:32 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.825 02:20:32 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:45.825 02:20:32 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.825 02:20:32 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:45.825 02:20:32 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:45.825 02:20:32 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.825 02:20:32 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:45.825 02:20:32 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.825 02:20:32 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.825 02:20:32 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.825 02:20:32 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:45.825 02:20:32 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.825 02:20:32 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:45.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.825 --rc genhtml_branch_coverage=1 00:09:45.825 --rc genhtml_function_coverage=1 00:09:45.825 --rc genhtml_legend=1 00:09:45.825 --rc geninfo_all_blocks=1 00:09:45.825 --rc geninfo_unexecuted_blocks=1 00:09:45.825 00:09:45.825 ' 00:09:45.825 02:20:32 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:45.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.825 --rc genhtml_branch_coverage=1 00:09:45.825 --rc genhtml_function_coverage=1 00:09:45.825 --rc genhtml_legend=1 00:09:45.825 --rc geninfo_all_blocks=1 00:09:45.825 --rc geninfo_unexecuted_blocks=1 00:09:45.825 00:09:45.825 ' 00:09:45.825 02:20:32 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:45.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.825 --rc genhtml_branch_coverage=1 00:09:45.825 --rc genhtml_function_coverage=1 00:09:45.825 --rc genhtml_legend=1 00:09:45.825 --rc geninfo_all_blocks=1 00:09:45.825 --rc geninfo_unexecuted_blocks=1 00:09:45.825 00:09:45.825 ' 00:09:45.825 02:20:32 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:45.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.825 --rc genhtml_branch_coverage=1 00:09:45.825 --rc genhtml_function_coverage=1 00:09:45.825 --rc genhtml_legend=1 00:09:45.825 --rc geninfo_all_blocks=1 00:09:45.825 --rc geninfo_unexecuted_blocks=1 00:09:45.825 00:09:45.825 ' 00:09:45.825 02:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:45.825 02:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65710 00:09:45.825 02:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65710 00:09:45.825 02:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65742 00:09:45.825 02:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
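The timeouts test follows a snapshot/modify/snapshot shape, with the field-by-field comparison sketched after the trace. Condensed (file names shortened from the /tmp/settings_*_65710 pair; the RPC calls are the ones traced below):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc save_config > /tmp/settings_default
    $rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 \
        --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified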
00:09:45.825 02:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65742 00:09:45.825 02:20:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:45.825 02:20:32 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # '[' -z 65742 ']' 00:09:45.825 02:20:32 nvme_rpc_timeouts -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.825 02:20:32 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:45.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.825 02:20:32 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.825 02:20:32 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:45.825 02:20:32 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:45.825 [2024-11-04 02:20:32.804331] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:09:45.825 [2024-11-04 02:20:32.804478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65742 ] 00:09:46.087 [2024-11-04 02:20:32.964795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:46.087 [2024-11-04 02:20:33.093286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.087 [2024-11-04 02:20:33.093374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.032 02:20:33 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:47.032 Checking default timeout settings: 00:09:47.032 02:20:33 nvme_rpc_timeouts -- common/autotest_common.sh@866 -- # return 0 00:09:47.032 02:20:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:47.032 02:20:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:47.292 Making settings changes with rpc: 00:09:47.292 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:47.292 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:47.551 Check default vs. modified settings: 00:09:47.551 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:47.551 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65710 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65710 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:47.810 Setting action_on_timeout is changed as expected. 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65710 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65710 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:47.810 Setting timeout_us is changed as expected. 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65710 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65710 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:47.810 Setting timeout_admin_us is changed as expected. 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65710 /tmp/settings_modified_65710 00:09:47.810 02:20:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65742 00:09:47.810 02:20:34 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # '[' -z 65742 ']' 00:09:47.810 02:20:34 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # kill -0 65742 00:09:47.810 02:20:34 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # uname 00:09:47.810 02:20:34 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:47.810 02:20:34 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65742 00:09:47.810 killing process with pid 65742 00:09:47.810 02:20:34 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:47.810 02:20:34 nvme_rpc_timeouts -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:47.810 02:20:34 nvme_rpc_timeouts -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65742' 00:09:47.810 02:20:34 nvme_rpc_timeouts -- common/autotest_common.sh@971 -- # kill 65742 00:09:47.810 02:20:34 nvme_rpc_timeouts -- common/autotest_common.sh@976 -- # wait 65742 00:09:49.220 RPC TIMEOUT SETTING TEST PASSED. 00:09:49.220 02:20:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
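(Aside: the default-vs-modified check traced above reduces to four steps: snapshot the JSON config, change the timeouts over RPC, snapshot again, and field-compare. A minimal sketch, reusing the variables from the previous aside; the grep | awk | sed pipeline mirrors the trace and strips punctuation so that a quoted JSON value such as "abort", compares equal to the bare value.)

$rpc_py save_config > "$tmpfile_default_settings"
$rpc_py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
$rpc_py save_config > "$tmpfile_modified_settings"

for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(grep "$setting" "$tmpfile_default_settings"  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep  "$setting" "$tmpfile_modified_settings" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    if [ "$before" == "$after" ]; then
        echo "Setting $setting did not change" >&2
        exit 1
    fi
    echo "Setting $setting is changed as expected."
done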
00:09:49.220 ************************************ 00:09:49.220 END TEST nvme_rpc_timeouts 00:09:49.220 ************************************ 00:09:49.220 00:09:49.220 real 0m3.431s 00:09:49.220 user 0m6.649s 00:09:49.220 sys 0m0.624s 00:09:49.220 02:20:35 nvme_rpc_timeouts -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:49.220 02:20:35 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:49.220 02:20:36 -- spdk/autotest.sh@239 -- # uname -s 00:09:49.220 02:20:36 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:49.220 02:20:36 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:49.220 02:20:36 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:49.220 02:20:36 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:49.220 02:20:36 -- common/autotest_common.sh@10 -- # set +x 00:09:49.220 ************************************ 00:09:49.220 START TEST sw_hotplug 00:09:49.220 ************************************ 00:09:49.220 02:20:36 sw_hotplug -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:49.220 * Looking for test storage... 00:09:49.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:49.220 02:20:36 sw_hotplug -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:49.220 02:20:36 sw_hotplug -- common/autotest_common.sh@1691 -- # lcov --version 00:09:49.220 02:20:36 sw_hotplug -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:49.220 02:20:36 sw_hotplug -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:49.220 02:20:36 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:49.220 02:20:36 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:49.220 02:20:36 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:49.220 02:20:36 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:49.220 02:20:36 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:49.220 02:20:36 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:49.220 02:20:36 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:49.220 02:20:36 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:49.220 02:20:36 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:49.220 02:20:36 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:49.220 02:20:36 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:49.220 02:20:36 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:49.220 02:20:36 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:49.220 02:20:36 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:49.220 02:20:36 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:49.220 02:20:36 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:49.220 02:20:36 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:49.220 02:20:36 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:49.220 02:20:36 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:49.220 02:20:36 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:49.220 02:20:36 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:49.220 02:20:36 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:49.220 02:20:36 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:49.220 02:20:36 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:49.220 02:20:36 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:49.220 02:20:36 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:49.220 02:20:36 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:49.220 02:20:36 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:49.220 02:20:36 sw_hotplug -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:49.220 02:20:36 sw_hotplug -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:49.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.220 --rc genhtml_branch_coverage=1 00:09:49.220 --rc genhtml_function_coverage=1 00:09:49.220 --rc genhtml_legend=1 00:09:49.220 --rc geninfo_all_blocks=1 00:09:49.220 --rc geninfo_unexecuted_blocks=1 00:09:49.220 00:09:49.220 ' 00:09:49.220 02:20:36 sw_hotplug -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:49.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.220 --rc genhtml_branch_coverage=1 00:09:49.220 --rc genhtml_function_coverage=1 00:09:49.220 --rc genhtml_legend=1 00:09:49.220 --rc geninfo_all_blocks=1 00:09:49.220 --rc geninfo_unexecuted_blocks=1 00:09:49.220 00:09:49.220 ' 00:09:49.220 02:20:36 sw_hotplug -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:49.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.220 --rc genhtml_branch_coverage=1 00:09:49.220 --rc genhtml_function_coverage=1 00:09:49.220 --rc genhtml_legend=1 00:09:49.220 --rc geninfo_all_blocks=1 00:09:49.220 --rc geninfo_unexecuted_blocks=1 00:09:49.220 00:09:49.220 ' 00:09:49.220 02:20:36 sw_hotplug -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:49.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.221 --rc genhtml_branch_coverage=1 00:09:49.221 --rc genhtml_function_coverage=1 00:09:49.221 --rc genhtml_legend=1 00:09:49.221 --rc geninfo_all_blocks=1 00:09:49.221 --rc geninfo_unexecuted_blocks=1 00:09:49.221 00:09:49.221 ' 00:09:49.221 02:20:36 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:49.480 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:49.741 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:49.741 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:49.741 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:49.741 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:49.741 02:20:36 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:49.741 02:20:36 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:49.741 02:20:36 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:09:49.741 02:20:36 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:49.741 02:20:36 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:49.741 02:20:36 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:49.741 02:20:36 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:49.741 02:20:36 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:49.741 02:20:36 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:50.003 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:50.265 Waiting for block devices as requested 00:09:50.265 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:50.265 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:50.265 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:50.527 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:55.820 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:55.820 02:20:42 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:55.820 02:20:42 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:56.082 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:56.083 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:56.083 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:56.344 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:56.606 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:56.606 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:56.606 02:20:43 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:56.606 02:20:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:56.867 02:20:43 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:56.867 02:20:43 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:56.867 02:20:43 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66603 00:09:56.867 02:20:43 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:56.867 02:20:43 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:56.867 02:20:43 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:56.867 02:20:43 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:56.867 02:20:43 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:09:56.867 02:20:43 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:09:56.867 02:20:43 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:09:56.867 02:20:43 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:09:56.867 02:20:43 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:09:56.867 02:20:43 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:56.867 02:20:43 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:56.867 02:20:43 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:56.867 02:20:43 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:56.867 02:20:43 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:56.867 Initializing NVMe Controllers 00:09:56.867 Attaching to 0000:00:10.0 00:09:56.867 Attaching to 0000:00:11.0 00:09:56.867 Attached to 0000:00:11.0 00:09:56.867 Attached to 0000:00:10.0 00:09:56.867 Initialization complete. Starting I/O... 
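(Aside: the remove/attach churn that the following I/O counters interleave with is driven through the kernel's PCI sysfs interface, repeated hotplug_events=3 times with hotplug_wait=6 seconds between phases, while the hotplug example app keeps I/O running. The xtrace above shows only the echo commands, not their redirection targets, so the paths in this sketch are the conventional sysfs ones and should be read as an assumption rather than a transcript; nvmes and hotplug_wait are the variables set earlier in the trace.)

for dev in "${nvmes[@]}"; do
    echo 1 > "/sys/bus/pci/devices/$dev/remove"    # surprise-remove the controller
done
sleep "$hotplug_wait"                              # give the hotplug app time to notice
echo 1 > /sys/bus/pci/rescan                       # re-enumerate the bus
for dev in "${nvmes[@]}"; do
    # steer the rediscovered function back to the userspace driver
    echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
    echo "$dev" > /sys/bus/pci/drivers_probe
    echo '' > "/sys/bus/pci/devices/$dev/driver_override"
done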
00:09:56.867 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:56.867 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:56.867 00:09:58.254 QEMU NVMe Ctrl (12341 ): 2377 I/Os completed (+2377) 00:09:58.254 QEMU NVMe Ctrl (12340 ): 2388 I/Os completed (+2388) 00:09:58.254 00:09:59.199 QEMU NVMe Ctrl (12341 ): 5173 I/Os completed (+2796) 00:09:59.199 QEMU NVMe Ctrl (12340 ): 5184 I/Os completed (+2796) 00:09:59.200 00:10:00.144 QEMU NVMe Ctrl (12341 ): 7957 I/Os completed (+2784) 00:10:00.144 QEMU NVMe Ctrl (12340 ): 7968 I/Os completed (+2784) 00:10:00.144 00:10:01.095 QEMU NVMe Ctrl (12341 ): 10821 I/Os completed (+2864) 00:10:01.095 QEMU NVMe Ctrl (12340 ): 10832 I/Os completed (+2864) 00:10:01.095 00:10:02.040 QEMU NVMe Ctrl (12341 ): 13821 I/Os completed (+3000) 00:10:02.040 QEMU NVMe Ctrl (12340 ): 13882 I/Os completed (+3050) 00:10:02.040 00:10:02.984 02:20:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:02.984 02:20:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:02.984 02:20:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:02.984 [2024-11-04 02:20:49.767273] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:02.984 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:02.984 [2024-11-04 02:20:49.768910] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.984 [2024-11-04 02:20:49.769645] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.984 [2024-11-04 02:20:49.769683] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.984 [2024-11-04 02:20:49.769704] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.984 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:02.984 [2024-11-04 02:20:49.771821] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.984 [2024-11-04 02:20:49.771902] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.984 [2024-11-04 02:20:49.771918] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.984 [2024-11-04 02:20:49.771935] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.984 02:20:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:02.984 02:20:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:02.984 [2024-11-04 02:20:49.788826] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:02.984 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:02.984 [2024-11-04 02:20:49.790081] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.984 [2024-11-04 02:20:49.790124] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.984 [2024-11-04 02:20:49.790144] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.984 [2024-11-04 02:20:49.790163] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.984 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:02.984 [2024-11-04 02:20:49.792050] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.984 [2024-11-04 02:20:49.792103] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.984 [2024-11-04 02:20:49.792122] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.984 [2024-11-04 02:20:49.792136] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.984 02:20:49 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:02.984 02:20:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:02.984 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:02.984 EAL: Scan for (pci) bus failed. 00:10:02.984 02:20:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:02.984 02:20:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:02.984 02:20:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:02.984 00:10:02.984 02:20:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:02.984 02:20:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:02.984 02:20:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:02.984 02:20:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:02.984 02:20:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:02.984 Attaching to 0000:00:10.0 00:10:02.984 Attached to 0000:00:10.0 00:10:03.246 02:20:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:03.246 02:20:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:03.246 02:20:50 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:03.246 Attaching to 0000:00:11.0 00:10:03.246 Attached to 0000:00:11.0 00:10:04.189 QEMU NVMe Ctrl (12340 ): 2513 I/Os completed (+2513) 00:10:04.189 QEMU NVMe Ctrl (12341 ): 2274 I/Os completed (+2274) 00:10:04.189 00:10:05.131 QEMU NVMe Ctrl (12340 ): 5289 I/Os completed (+2776) 00:10:05.131 QEMU NVMe Ctrl (12341 ): 5064 I/Os completed (+2790) 00:10:05.131 00:10:06.074 QEMU NVMe Ctrl (12340 ): 8109 I/Os completed (+2820) 00:10:06.074 QEMU NVMe Ctrl (12341 ): 7887 I/Os completed (+2823) 00:10:06.074 00:10:07.020 QEMU NVMe Ctrl (12340 ): 10917 I/Os completed (+2808) 00:10:07.020 QEMU NVMe Ctrl (12341 ): 10692 I/Os completed (+2805) 00:10:07.020 00:10:07.961 QEMU NVMe Ctrl (12340 ): 13993 I/Os completed (+3076) 00:10:07.961 QEMU NVMe Ctrl (12341 ): 13769 I/Os completed (+3077) 00:10:07.961 00:10:08.896 QEMU NVMe Ctrl (12340 ): 17615 I/Os completed (+3622) 00:10:08.896 QEMU NVMe Ctrl (12341 ): 17380 I/Os completed (+3611) 00:10:08.896 00:10:10.269 QEMU NVMe Ctrl (12340 ): 21335 I/Os completed (+3720) 00:10:10.269 QEMU NVMe Ctrl (12341 ): 21109 I/Os completed (+3729) 
00:10:10.269 00:10:11.209 QEMU NVMe Ctrl (12340 ): 24961 I/Os completed (+3626) 00:10:11.209 QEMU NVMe Ctrl (12341 ): 24720 I/Os completed (+3611) 00:10:11.209 00:10:12.148 QEMU NVMe Ctrl (12340 ): 27782 I/Os completed (+2821) 00:10:12.148 QEMU NVMe Ctrl (12341 ): 27532 I/Os completed (+2812) 00:10:12.148 00:10:13.081 QEMU NVMe Ctrl (12340 ): 31466 I/Os completed (+3684) 00:10:13.081 QEMU NVMe Ctrl (12341 ): 31211 I/Os completed (+3679) 00:10:13.081 00:10:14.018 QEMU NVMe Ctrl (12340 ): 35139 I/Os completed (+3673) 00:10:14.018 QEMU NVMe Ctrl (12341 ): 34884 I/Os completed (+3673) 00:10:14.018 00:10:14.959 QEMU NVMe Ctrl (12340 ): 38574 I/Os completed (+3435) 00:10:14.959 QEMU NVMe Ctrl (12341 ): 38343 I/Os completed (+3459) 00:10:14.959 00:10:15.218 02:21:02 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:15.218 02:21:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:15.218 02:21:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:15.218 02:21:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:15.218 [2024-11-04 02:21:02.117498] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:15.218 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:15.218 [2024-11-04 02:21:02.119370] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.218 [2024-11-04 02:21:02.119566] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.218 [2024-11-04 02:21:02.119611] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.218 [2024-11-04 02:21:02.119692] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.218 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:15.218 [2024-11-04 02:21:02.122379] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.218 [2024-11-04 02:21:02.122579] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.218 [2024-11-04 02:21:02.122605] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.218 [2024-11-04 02:21:02.122621] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.218 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:10:15.218 EAL: Scan for (pci) bus failed. 00:10:15.218 02:21:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:15.219 02:21:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:15.219 [2024-11-04 02:21:02.143327] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:15.219 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:15.219 [2024-11-04 02:21:02.144663] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.219 [2024-11-04 02:21:02.144723] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.219 [2024-11-04 02:21:02.144747] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.219 [2024-11-04 02:21:02.144765] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.219 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:15.219 [2024-11-04 02:21:02.146737] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.219 [2024-11-04 02:21:02.146789] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.219 [2024-11-04 02:21:02.146806] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.219 [2024-11-04 02:21:02.146822] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.219 02:21:02 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:15.219 02:21:02 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:15.219 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:15.219 EAL: Scan for (pci) bus failed. 00:10:15.219 02:21:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:15.219 02:21:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:15.219 02:21:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:15.219 02:21:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:15.479 02:21:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:15.479 02:21:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:15.479 02:21:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:15.479 02:21:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:15.479 Attaching to 0000:00:10.0 00:10:15.479 Attached to 0000:00:10.0 00:10:15.479 02:21:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:15.479 02:21:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:15.479 02:21:02 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:15.479 Attaching to 0000:00:11.0 00:10:15.479 Attached to 0000:00:11.0 00:10:16.052 QEMU NVMe Ctrl (12340 ): 1745 I/Os completed (+1745) 00:10:16.052 QEMU NVMe Ctrl (12341 ): 1498 I/Os completed (+1498) 00:10:16.052 00:10:16.994 QEMU NVMe Ctrl (12340 ): 4349 I/Os completed (+2604) 00:10:16.995 QEMU NVMe Ctrl (12341 ): 4108 I/Os completed (+2610) 00:10:16.995 00:10:17.970 QEMU NVMe Ctrl (12340 ): 7061 I/Os completed (+2712) 00:10:17.970 QEMU NVMe Ctrl (12341 ): 6820 I/Os completed (+2712) 00:10:17.970 00:10:18.903 QEMU NVMe Ctrl (12340 ): 10375 I/Os completed (+3314) 00:10:18.903 QEMU NVMe Ctrl (12341 ): 9822 I/Os completed (+3002) 00:10:18.903 00:10:20.279 QEMU NVMe Ctrl (12340 ): 13792 I/Os completed (+3417) 00:10:20.279 QEMU NVMe Ctrl (12341 ): 13081 I/Os completed (+3259) 00:10:20.279 00:10:21.215 QEMU NVMe Ctrl (12340 ): 17840 I/Os completed (+4048) 00:10:21.215 QEMU NVMe Ctrl (12341 ): 17132 I/Os completed (+4051) 00:10:21.215 00:10:22.175 QEMU NVMe Ctrl (12340 ): 21856 I/Os completed (+4016) 00:10:22.175 QEMU NVMe Ctrl (12341 ): 21156 I/Os completed (+4024) 00:10:22.175 
00:10:23.109 QEMU NVMe Ctrl (12340 ): 25468 I/Os completed (+3612) 00:10:23.109 QEMU NVMe Ctrl (12341 ): 24761 I/Os completed (+3605) 00:10:23.109 00:10:24.057 QEMU NVMe Ctrl (12340 ): 29071 I/Os completed (+3603) 00:10:24.057 QEMU NVMe Ctrl (12341 ): 28372 I/Os completed (+3611) 00:10:24.057 00:10:24.992 QEMU NVMe Ctrl (12340 ): 32749 I/Os completed (+3678) 00:10:24.992 QEMU NVMe Ctrl (12341 ): 32042 I/Os completed (+3670) 00:10:24.992 00:10:25.928 QEMU NVMe Ctrl (12340 ): 36371 I/Os completed (+3622) 00:10:25.928 QEMU NVMe Ctrl (12341 ): 35689 I/Os completed (+3647) 00:10:25.928 00:10:26.864 QEMU NVMe Ctrl (12340 ): 40007 I/Os completed (+3636) 00:10:26.864 QEMU NVMe Ctrl (12341 ): 39324 I/Os completed (+3635) 00:10:26.864 00:10:27.433 02:21:14 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:27.433 02:21:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:27.433 02:21:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:27.433 02:21:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:27.433 [2024-11-04 02:21:14.423349] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:27.433 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:27.433 [2024-11-04 02:21:14.425514] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.433 [2024-11-04 02:21:14.425622] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.433 [2024-11-04 02:21:14.425653] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.434 [2024-11-04 02:21:14.425706] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.434 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:27.434 [2024-11-04 02:21:14.427320] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.434 [2024-11-04 02:21:14.427371] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.434 [2024-11-04 02:21:14.427394] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.434 [2024-11-04 02:21:14.427416] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.434 02:21:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:27.434 02:21:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:27.434 [2024-11-04 02:21:14.448164] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:27.434 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:27.434 [2024-11-04 02:21:14.449155] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.434 [2024-11-04 02:21:14.449264] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.434 [2024-11-04 02:21:14.449295] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.434 [2024-11-04 02:21:14.449355] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.434 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:27.434 [2024-11-04 02:21:14.450771] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.434 [2024-11-04 02:21:14.450819] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.434 [2024-11-04 02:21:14.450845] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.434 [2024-11-04 02:21:14.450876] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.434 02:21:14 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:27.434 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:27.434 EAL: Scan for (pci) bus failed. 00:10:27.434 02:21:14 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:27.434 02:21:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:27.434 02:21:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:27.434 02:21:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:27.695 02:21:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:27.695 02:21:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:27.695 02:21:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:27.695 02:21:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:27.695 02:21:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:27.695 Attaching to 0000:00:10.0 00:10:27.695 Attached to 0000:00:10.0 00:10:27.695 02:21:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:27.695 02:21:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:27.695 02:21:14 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:27.695 Attaching to 0000:00:11.0 00:10:27.695 Attached to 0000:00:11.0 00:10:27.695 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:27.695 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:27.695 [2024-11-04 02:21:14.701934] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:39.939 02:21:26 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:39.939 02:21:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:39.939 02:21:26 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.94 00:10:39.939 02:21:26 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.94 00:10:39.939 02:21:26 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:10:39.939 02:21:26 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.94 00:10:39.939 02:21:26 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.94 2 00:10:39.939 remove_attach_helper took 42.94s to complete (handling 2 nvme drive(s)) 02:21:26 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:46.525 02:21:32 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66603 00:10:46.525 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66603) - No such process 00:10:46.525 02:21:32 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66603 00:10:46.525 02:21:32 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:46.525 02:21:32 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:46.525 02:21:32 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:46.525 02:21:32 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67147 00:10:46.525 02:21:32 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:46.525 02:21:32 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67147 00:10:46.525 02:21:32 sw_hotplug -- common/autotest_common.sh@833 -- # '[' -z 67147 ']' 00:10:46.525 02:21:32 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:46.525 02:21:32 sw_hotplug -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.525 02:21:32 sw_hotplug -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:46.525 02:21:32 sw_hotplug -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.525 02:21:32 sw_hotplug -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:46.525 02:21:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:46.525 [2024-11-04 02:21:32.797790] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:10:46.525 [2024-11-04 02:21:32.798131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67147 ] 00:10:46.525 [2024-11-04 02:21:32.963172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.525 [2024-11-04 02:21:33.083292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.787 02:21:33 sw_hotplug -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:46.787 02:21:33 sw_hotplug -- common/autotest_common.sh@866 -- # return 0 00:10:46.787 02:21:33 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:46.787 02:21:33 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.787 02:21:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:46.787 02:21:33 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.787 02:21:33 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:46.787 02:21:33 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:46.787 02:21:33 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:46.787 02:21:33 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:46.787 02:21:33 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:46.787 02:21:33 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:46.787 02:21:33 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:46.787 02:21:33 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:10:46.787 02:21:33 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:46.787 02:21:33 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:46.787 02:21:33 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:46.787 02:21:33 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:46.787 02:21:33 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:53.357 02:21:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:53.357 02:21:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:53.357 02:21:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:53.357 02:21:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:53.357 02:21:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:53.357 02:21:39 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:53.357 02:21:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:53.357 02:21:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:53.357 02:21:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:53.357 02:21:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:53.357 02:21:39 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.357 02:21:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:53.357 02:21:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:53.357 02:21:39 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.357 02:21:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:53.357 02:21:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:53.357 [2024-11-04 02:21:39.883215] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:10:53.357 [2024-11-04 02:21:39.884432] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.357 [2024-11-04 02:21:39.884469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.357 [2024-11-04 02:21:39.884482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.357 [2024-11-04 02:21:39.884499] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.357 [2024-11-04 02:21:39.884507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.357 [2024-11-04 02:21:39.884515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.357 [2024-11-04 02:21:39.884522] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.357 [2024-11-04 02:21:39.884530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.357 [2024-11-04 02:21:39.884536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.357 [2024-11-04 02:21:39.884546] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.357 [2024-11-04 02:21:39.884553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.357 [2024-11-04 02:21:39.884561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.357 02:21:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:53.357 02:21:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:53.357 02:21:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:53.357 02:21:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:53.357 02:21:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:53.357 02:21:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:53.357 02:21:40 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.357 02:21:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:53.357 02:21:40 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.357 02:21:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:53.357 02:21:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:53.616 [2024-11-04 02:21:40.483216] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:53.616 [2024-11-04 02:21:40.484390] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.616 [2024-11-04 02:21:40.484421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.616 [2024-11-04 02:21:40.484432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.616 [2024-11-04 02:21:40.484444] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.616 [2024-11-04 02:21:40.484452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.616 [2024-11-04 02:21:40.484459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.616 [2024-11-04 02:21:40.484468] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.616 [2024-11-04 02:21:40.484475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.616 [2024-11-04 02:21:40.484482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.616 [2024-11-04 02:21:40.484489] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.616 [2024-11-04 02:21:40.484497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.616 [2024-11-04 02:21:40.484503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.874 02:21:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:53.874 02:21:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:53.874 02:21:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:53.874 02:21:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:53.874 02:21:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:53.874 02:21:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:53.874 02:21:40 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.874 02:21:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:53.874 02:21:40 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.874 02:21:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:53.874 02:21:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:54.132 02:21:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:54.132 02:21:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:54.132 02:21:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:54.132 02:21:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:54.132 02:21:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:54.132 02:21:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:54.132 02:21:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:54.132 02:21:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:10:54.132 02:21:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:54.132 02:21:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:54.132 02:21:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:06.328 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:06.328 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:06.328 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:06.328 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:06.328 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:06.328 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:06.328 02:21:53 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.328 02:21:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:06.328 02:21:53 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.328 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:06.328 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:06.328 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:06.328 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:06.328 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:06.328 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:06.328 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:06.328 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:06.328 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:06.328 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:06.328 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:06.328 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:06.328 02:21:53 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.328 02:21:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:06.328 02:21:53 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.328 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:06.328 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:06.328 [2024-11-04 02:21:53.283432] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:06.328 [2024-11-04 02:21:53.284727] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.328 [2024-11-04 02:21:53.284825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.328 [2024-11-04 02:21:53.284890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.328 [2024-11-04 02:21:53.284929] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.328 [2024-11-04 02:21:53.284946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.328 [2024-11-04 02:21:53.284971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.328 [2024-11-04 02:21:53.284995] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.328 [2024-11-04 02:21:53.285013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.328 [2024-11-04 02:21:53.285079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.328 [2024-11-04 02:21:53.285107] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.328 [2024-11-04 02:21:53.285123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.328 [2024-11-04 02:21:53.285176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.586 [2024-11-04 02:21:53.683437] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
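The repeated "Still waiting for ... to be gone" messages come from a short polling loop around that helper (sw_hotplug.sh lines 50-51), re-reading the BDF list every half second until the removed controllers drop out of it. Only the expanded commands are visible in the trace, so the loop construct below is an inference from their order (@50 list + test + sleep, then @51 printf):

    # Loop shape inferred from the trace order; not copied from the script source.
    while bdfs=($(bdev_bdfs)) && (( ${#bdfs[@]} > 0 )) && sleep 0.5; do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    done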
00:11:06.586 [2024-11-04 02:21:53.684660] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.586 [2024-11-04 02:21:53.684757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.586 [2024-11-04 02:21:53.684820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.586 [2024-11-04 02:21:53.684849] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.586 [2024-11-04 02:21:53.684880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.586 [2024-11-04 02:21:53.684905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.586 [2024-11-04 02:21:53.684930] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.586 [2024-11-04 02:21:53.684946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.586 [2024-11-04 02:21:53.685001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.586 [2024-11-04 02:21:53.685026] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.586 [2024-11-04 02:21:53.685043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.586 [2024-11-04 02:21:53.685125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.844 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:06.844 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:06.844 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:06.844 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:06.844 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:06.844 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:06.844 02:21:53 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.844 02:21:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:06.845 02:21:53 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.845 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:06.845 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:06.845 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:06.845 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:06.845 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:06.845 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:06.845 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:06.845 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:06.845 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:06.845 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:07.103 02:21:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:07.103 02:21:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:07.103 02:21:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:19.299 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:19.299 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:19.299 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:19.299 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:19.299 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:19.299 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:19.299 02:22:06 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.299 02:22:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:19.299 02:22:06 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.299 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:19.299 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:19.299 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:19.299 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:19.299 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:19.299 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:19.299 [2024-11-04 02:22:06.083665] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:19.299 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:19.299 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:19.299 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:19.299 EAL: Scan for (pci) bus failed. 
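The bare echo commands in the trace (@39-40 for removal, @56 and @58-62 for re-attach) drive Linux PCI hotplug through sysfs. bash xtrace does not print redirection targets, so every path below is an assumption based on the standard sysfs interface rather than on the trace itself. The interleaved "EAL: eal_parse_sysfs_value(): cannot open sysfs value .../vendor" and "EAL: Scan for (pci) bus failed." lines are most likely a benign race: a PCI scan touching the config space of a device that has just been surprise-removed.

    # All sysfs paths are assumptions: xtrace hides the redirections.
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"        # @40: surprise-remove
    done
    echo 1 > /sys/bus/pci/rescan                           # @56: rediscover devices
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59
        echo "$dev" > /sys/bus/pci/drivers_probe           # @60/@61 echo the BDF twice;
                                                           # drivers_probe is one plausible target
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # @62: clear override
    done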
00:11:19.299 [2024-11-04 02:22:06.085107] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.299 [2024-11-04 02:22:06.085132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.299 [2024-11-04 02:22:06.085141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.299 [2024-11-04 02:22:06.085158] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.299 [2024-11-04 02:22:06.085166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.299 [2024-11-04 02:22:06.085175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.299 [2024-11-04 02:22:06.085183] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.299 [2024-11-04 02:22:06.085190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.299 [2024-11-04 02:22:06.085197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.299 [2024-11-04 02:22:06.085205] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.299 [2024-11-04 02:22:06.085212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.299 [2024-11-04 02:22:06.085220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.299 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:19.299 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:19.299 02:22:06 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.299 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:19.299 02:22:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:19.299 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:19.299 02:22:06 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.299 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:19.299 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:19.559 [2024-11-04 02:22:06.583663] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:19.559 [2024-11-04 02:22:06.584907] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.559 [2024-11-04 02:22:06.584934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.559 [2024-11-04 02:22:06.584945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.559 [2024-11-04 02:22:06.584958] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.559 [2024-11-04 02:22:06.584967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.559 [2024-11-04 02:22:06.584974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.559 [2024-11-04 02:22:06.584982] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.559 [2024-11-04 02:22:06.584988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.559 [2024-11-04 02:22:06.584998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.559 [2024-11-04 02:22:06.585005] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.559 [2024-11-04 02:22:06.585012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.559 [2024-11-04 02:22:06.585019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.559 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:19.559 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:19.559 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:19.559 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:19.559 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:19.559 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:19.559 02:22:06 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.559 02:22:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:19.559 02:22:06 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.559 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:19.559 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:19.817 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:19.817 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:19.817 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:19.817 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:19.817 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:19.817 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:19.817 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:19.817 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:19.817 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:19.817 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:19.817 02:22:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:32.046 02:22:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:32.046 02:22:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:32.046 02:22:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:32.046 02:22:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:32.047 02:22:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:32.047 02:22:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:32.047 02:22:18 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.047 02:22:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:32.047 02:22:18 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.047 02:22:18 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:32.047 02:22:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:32.047 02:22:18 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.10 00:11:32.047 02:22:18 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.10 00:11:32.047 02:22:18 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:32.047 02:22:18 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.10 00:11:32.047 02:22:18 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.10 2 00:11:32.047 remove_attach_helper took 45.10s to complete (handling 2 nvme drive(s)) 02:22:18 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:32.047 02:22:18 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.047 02:22:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:32.047 02:22:18 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.047 02:22:18 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:32.047 02:22:18 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.047 02:22:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:32.047 02:22:18 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.047 02:22:18 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:32.047 02:22:18 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:32.047 02:22:18 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:32.047 02:22:18 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:32.047 02:22:18 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:32.047 02:22:18 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:32.047 02:22:18 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:32.047 02:22:18 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:32.047 02:22:18 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:32.047 02:22:18 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:32.047 02:22:18 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:32.047 02:22:18 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:32.047 02:22:18 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:38.625 02:22:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:38.625 02:22:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:38.625 02:22:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:38.625 02:22:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:38.625 02:22:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:38.625 02:22:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:38.625 02:22:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:38.625 02:22:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:38.625 02:22:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:38.625 02:22:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:38.625 02:22:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:38.625 02:22:24 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.625 02:22:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:38.625 02:22:24 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.625 02:22:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:38.625 02:22:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:38.625 [2024-11-04 02:22:25.011144] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:38.625 [2024-11-04 02:22:25.012246] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.625 [2024-11-04 02:22:25.012279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.625 [2024-11-04 02:22:25.012290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.625 [2024-11-04 02:22:25.012307] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.625 [2024-11-04 02:22:25.012315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.625 [2024-11-04 02:22:25.012323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.625 [2024-11-04 02:22:25.012330] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.625 [2024-11-04 02:22:25.012338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.625 [2024-11-04 02:22:25.012345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.625 [2024-11-04 02:22:25.012353] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.625 [2024-11-04 02:22:25.012360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.625 [2024-11-04 02:22:25.012369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.625 02:22:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:38.625 02:22:25 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:38.625 02:22:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:38.625 02:22:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:38.625 02:22:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:38.625 02:22:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:38.625 02:22:25 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.625 02:22:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:38.625 [2024-11-04 02:22:25.511140] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:38.625 [2024-11-04 02:22:25.512015] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.625 [2024-11-04 02:22:25.512043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.625 [2024-11-04 02:22:25.512054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.625 [2024-11-04 02:22:25.512067] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.625 [2024-11-04 02:22:25.512076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.625 [2024-11-04 02:22:25.512083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.625 [2024-11-04 02:22:25.512091] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.625 [2024-11-04 02:22:25.512097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.625 [2024-11-04 02:22:25.512105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.625 [2024-11-04 02:22:25.512112] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.625 [2024-11-04 02:22:25.512120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.625 [2024-11-04 02:22:25.512126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.625 02:22:25 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.625 02:22:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:38.625 02:22:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:39.191 02:22:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:39.191 02:22:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:39.191 02:22:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:39.191 02:22:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:39.191 02:22:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:39.191 02:22:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:39.191 02:22:26 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.191 02:22:26 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:11:39.191 02:22:26 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.191 02:22:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:39.191 02:22:26 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:39.191 02:22:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:39.191 02:22:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:39.191 02:22:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:39.191 02:22:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:39.191 02:22:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:39.191 02:22:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:39.191 02:22:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:39.191 02:22:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:39.191 02:22:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:39.191 02:22:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:39.449 02:22:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:51.648 02:22:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:51.648 02:22:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:51.648 02:22:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:51.648 02:22:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:51.648 02:22:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:51.648 02:22:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:51.648 02:22:38 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.648 02:22:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:51.648 02:22:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.648 02:22:38 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:51.648 02:22:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:51.648 02:22:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:51.648 02:22:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:51.648 02:22:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:51.648 02:22:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:51.648 02:22:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:51.648 02:22:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:51.648 02:22:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:51.648 02:22:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:51.648 02:22:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:51.648 02:22:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:51.648 02:22:38 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.648 02:22:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:51.648 02:22:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.648 02:22:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:51.648 02:22:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:51.648 [2024-11-04 02:22:38.411365] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
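The "remove_attach_helper took 45.10s to complete (handling 2 nvme drive(s))" line earlier in this run comes from a generic timing wrapper (autotest_common.sh@707-720) built on bash's TIMEFORMAT. A simplified sketch of that pattern; the real helper additionally preserves the wrapped command's output and exit status through fd redirections that xtrace does not show:

    # Simplified: the wrapped command's own output is discarded here,
    # which the real wrapper avoids with extra fd plumbing.
    timing_cmd() {
        local TIMEFORMAT=%2R time    # %2R = elapsed real seconds, two decimals
        time=$( { time "$@" > /dev/null 2>&1; } 2>&1 )
        echo "$time"
    }

    helper_time=$(timing_cmd remove_attach_helper 3 6 true)
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" "${#nvmes[@]}"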
00:11:51.648 [2024-11-04 02:22:38.412246] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.648 [2024-11-04 02:22:38.412281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.648 [2024-11-04 02:22:38.412291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.648 [2024-11-04 02:22:38.412307] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.648 [2024-11-04 02:22:38.412315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.648 [2024-11-04 02:22:38.412323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.648 [2024-11-04 02:22:38.412330] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.648 [2024-11-04 02:22:38.412338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.648 [2024-11-04 02:22:38.412345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.648 [2024-11-04 02:22:38.412353] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.648 [2024-11-04 02:22:38.412360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.649 [2024-11-04 02:22:38.412367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.907 [2024-11-04 02:22:38.811366] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:51.907 [2024-11-04 02:22:38.812224] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.907 [2024-11-04 02:22:38.812248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.907 [2024-11-04 02:22:38.812258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.907 [2024-11-04 02:22:38.812269] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.907 [2024-11-04 02:22:38.812279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.907 [2024-11-04 02:22:38.812286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.907 [2024-11-04 02:22:38.812295] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.907 [2024-11-04 02:22:38.812301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.907 [2024-11-04 02:22:38.812310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.907 [2024-11-04 02:22:38.812317] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.907 [2024-11-04 02:22:38.812325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.907 [2024-11-04 02:22:38.812331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.907 02:22:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:51.907 02:22:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:51.907 02:22:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:51.907 02:22:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:51.907 02:22:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:51.907 02:22:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:51.907 02:22:38 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.907 02:22:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:51.907 02:22:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.907 02:22:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:51.907 02:22:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:51.907 02:22:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:51.907 02:22:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:51.907 02:22:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:52.165 02:22:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:52.165 02:22:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:52.165 02:22:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:52.165 02:22:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:52.165 02:22:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:52.165 02:22:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:52.165 02:22:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:52.165 02:22:39 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:04.366 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:04.366 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:04.366 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:04.366 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:04.366 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:04.366 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:04.366 02:22:51 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.366 02:22:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:04.366 02:22:51 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.366 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:04.366 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:04.366 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:04.366 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:04.366 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:04.366 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:04.366 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:04.366 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:04.366 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:04.366 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:04.366 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:04.366 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:04.366 02:22:51 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.366 02:22:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:04.366 02:22:51 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.366 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:04.366 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:04.366 [2024-11-04 02:22:51.311586] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:12:04.366 [2024-11-04 02:22:51.312511] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.366 [2024-11-04 02:22:51.312543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.366 [2024-11-04 02:22:51.312554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.366 [2024-11-04 02:22:51.312572] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.366 [2024-11-04 02:22:51.312579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.366 [2024-11-04 02:22:51.312587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.366 [2024-11-04 02:22:51.312594] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.366 [2024-11-04 02:22:51.312605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.366 [2024-11-04 02:22:51.312612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.366 [2024-11-04 02:22:51.312620] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.367 [2024-11-04 02:22:51.312626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.367 [2024-11-04 02:22:51.312634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.625 [2024-11-04 02:22:51.711583] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
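After the re-attach echoes, each hotplug cycle sleeps 12 seconds and then confirms that exactly the original controllers are visible again (sw_hotplug.sh@66-71); the heavily escaped \0\0\0\0\:... pattern in the trace is just xtrace's quoting of the right-hand side of that comparison. A sketch of the check, with the saved-list variable name invented for illustration:

    # $bdfs_before is a hypothetical name for the list captured before the first removal.
    sleep 12                                # @66: give rescan/driver binding time to settle
    bdfs=($(bdev_bdfs))                     # @70
    [[ "${bdfs[*]}" == "$bdfs_before" ]]    # @71: here "0000:00:10.0 0000:00:11.0"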
00:12:04.625 [2024-11-04 02:22:51.712471] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.625 [2024-11-04 02:22:51.712498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.625 [2024-11-04 02:22:51.712510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.625 [2024-11-04 02:22:51.712521] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.625 [2024-11-04 02:22:51.712529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.625 [2024-11-04 02:22:51.712537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.625 [2024-11-04 02:22:51.712548] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.625 [2024-11-04 02:22:51.712554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.625 [2024-11-04 02:22:51.712562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.625 [2024-11-04 02:22:51.712569] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.625 [2024-11-04 02:22:51.712579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.625 [2024-11-04 02:22:51.712585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.883 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:04.883 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:04.883 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:04.883 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:04.883 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:04.883 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:04.883 02:22:51 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.883 02:22:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:04.883 02:22:51 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.883 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:04.883 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:04.883 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:04.883 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:04.883 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:04.883 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:04.883 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:04.883 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:04.883 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:04.883 02:22:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:05.140 02:22:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:05.140 02:22:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:05.140 02:22:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:17.379 02:23:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:17.379 02:23:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:17.379 02:23:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:17.379 02:23:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:17.379 02:23:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:17.379 02:23:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:17.379 02:23:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.379 02:23:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:17.379 02:23:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.379 02:23:04 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:17.379 02:23:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:17.379 02:23:04 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.17 00:12:17.379 02:23:04 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.17 00:12:17.379 02:23:04 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:17.379 02:23:04 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.17 00:12:17.379 02:23:04 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.17 2 00:12:17.379 remove_attach_helper took 45.17s to complete (handling 2 nvme drive(s)) 02:23:04 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:17.379 02:23:04 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67147 00:12:17.379 02:23:04 sw_hotplug -- common/autotest_common.sh@952 -- # '[' -z 67147 ']' 00:12:17.379 02:23:04 sw_hotplug -- common/autotest_common.sh@956 -- # kill -0 67147 00:12:17.379 02:23:04 sw_hotplug -- common/autotest_common.sh@957 -- # uname 00:12:17.379 02:23:04 sw_hotplug -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:17.380 02:23:04 sw_hotplug -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67147 00:12:17.380 02:23:04 sw_hotplug -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:17.380 02:23:04 sw_hotplug -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:17.380 killing process with pid 67147 00:12:17.380 02:23:04 sw_hotplug -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67147' 00:12:17.380 02:23:04 sw_hotplug -- common/autotest_common.sh@971 -- # kill 67147 00:12:17.380 02:23:04 sw_hotplug -- common/autotest_common.sh@976 -- # wait 67147 00:12:18.315 02:23:05 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:18.576 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:19.148 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:19.148 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:19.148 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:19.148 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:19.148 00:12:19.148 real 2m30.163s 00:12:19.148 user 1m51.788s 00:12:19.148 sys 0m16.837s 00:12:19.148 02:23:06 sw_hotplug -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:12:19.148 02:23:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:19.148 ************************************ 00:12:19.148 END TEST sw_hotplug 00:12:19.148 ************************************ 00:12:19.148 02:23:06 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:19.148 02:23:06 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:19.148 02:23:06 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:19.148 02:23:06 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:19.148 02:23:06 -- common/autotest_common.sh@10 -- # set +x 00:12:19.410 ************************************ 00:12:19.410 START TEST nvme_xnvme 00:12:19.410 ************************************ 00:12:19.410 02:23:06 nvme_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:19.410 * Looking for test storage... 00:12:19.410 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:19.410 02:23:06 nvme_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:19.410 02:23:06 nvme_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:19.410 02:23:06 nvme_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:12:19.410 02:23:06 nvme_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:19.410 02:23:06 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:19.410 02:23:06 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:19.410 02:23:06 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:19.410 02:23:06 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:19.410 02:23:06 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:19.410 02:23:06 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:19.410 02:23:06 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:19.410 02:23:06 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:19.410 02:23:06 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:19.410 02:23:06 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:19.410 02:23:06 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:19.410 02:23:06 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:19.410 02:23:06 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:19.410 02:23:06 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:19.410 02:23:06 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:19.410 02:23:06 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:19.410 02:23:06 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:19.410 02:23:06 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:19.410 02:23:06 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:19.410 02:23:06 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:19.410 02:23:06 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:19.410 02:23:06 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:19.410 02:23:06 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:19.410 02:23:06 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:19.410 02:23:06 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:19.410 02:23:06 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:19.410 02:23:06 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:19.410 02:23:06 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:19.410 02:23:06 nvme_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:19.410 02:23:06 nvme_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:19.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.410 --rc genhtml_branch_coverage=1 00:12:19.410 --rc genhtml_function_coverage=1 00:12:19.410 --rc genhtml_legend=1 00:12:19.410 --rc geninfo_all_blocks=1 00:12:19.410 --rc geninfo_unexecuted_blocks=1 00:12:19.410 00:12:19.410 ' 00:12:19.410 02:23:06 nvme_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:19.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.410 --rc genhtml_branch_coverage=1 00:12:19.410 --rc genhtml_function_coverage=1 00:12:19.410 --rc genhtml_legend=1 00:12:19.410 --rc geninfo_all_blocks=1 00:12:19.410 --rc geninfo_unexecuted_blocks=1 00:12:19.410 00:12:19.410 ' 00:12:19.410 02:23:06 nvme_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:19.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.410 --rc genhtml_branch_coverage=1 00:12:19.410 --rc genhtml_function_coverage=1 00:12:19.410 --rc genhtml_legend=1 00:12:19.410 --rc geninfo_all_blocks=1 00:12:19.410 --rc geninfo_unexecuted_blocks=1 00:12:19.410 00:12:19.410 ' 00:12:19.410 02:23:06 nvme_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:19.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.410 --rc genhtml_branch_coverage=1 00:12:19.410 --rc genhtml_function_coverage=1 00:12:19.410 --rc genhtml_legend=1 00:12:19.410 --rc geninfo_all_blocks=1 00:12:19.410 --rc geninfo_unexecuted_blocks=1 00:12:19.410 00:12:19.410 ' 00:12:19.410 02:23:06 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:19.410 02:23:06 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:19.410 02:23:06 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.410 02:23:06 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.410 02:23:06 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.410 02:23:06 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.410 02:23:06 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.410 02:23:06 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.410 02:23:06 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:19.410 02:23:06 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.410 02:23:06 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:19.410 02:23:06 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:19.410 02:23:06 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:19.410 02:23:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:19.410 ************************************ 00:12:19.410 START TEST xnvme_to_malloc_dd_copy 00:12:19.410 ************************************ 00:12:19.410 02:23:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1127 -- # malloc_to_xnvme_copy 00:12:19.410 02:23:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:19.410 02:23:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:19.410 02:23:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:19.410 02:23:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:19.410 02:23:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:19.410 02:23:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:19.410 02:23:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:19.410 02:23:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:19.410 02:23:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:19.410 02:23:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:19.410 02:23:06 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:19.411 02:23:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:19.411 02:23:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:19.411 02:23:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:19.411 02:23:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:19.411 02:23:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:19.411 02:23:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:19.411 02:23:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:19.411 02:23:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:19.411 02:23:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:19.411 02:23:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:19.411 02:23:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:19.411 02:23:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:19.411 02:23:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:19.411 { 00:12:19.411 "subsystems": [ 00:12:19.411 { 00:12:19.411 "subsystem": "bdev", 00:12:19.411 "config": [ 00:12:19.411 { 00:12:19.411 "params": { 00:12:19.411 "block_size": 512, 00:12:19.411 "num_blocks": 2097152, 00:12:19.411 "name": "malloc0" 00:12:19.411 }, 00:12:19.411 "method": "bdev_malloc_create" 00:12:19.411 }, 00:12:19.411 { 00:12:19.411 "params": { 00:12:19.411 "io_mechanism": "libaio", 00:12:19.411 "filename": "/dev/nullb0", 00:12:19.411 "name": "null0" 00:12:19.411 }, 00:12:19.411 "method": "bdev_xnvme_create" 00:12:19.411 }, 00:12:19.411 { 00:12:19.411 "method": "bdev_wait_for_examine" 00:12:19.411 } 00:12:19.411 ] 00:12:19.411 } 00:12:19.411 ] 00:12:19.411 } 00:12:19.670 [2024-11-04 02:23:06.545522] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:12:19.670 [2024-11-04 02:23:06.545664] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68529 ] 00:12:19.670 [2024-11-04 02:23:06.705749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.928 [2024-11-04 02:23:06.796160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.830  [2024-11-04T02:23:09.876Z] Copying: 300/1024 [MB] (300 MBps) [2024-11-04T02:23:10.811Z] Copying: 602/1024 [MB] (301 MBps) [2024-11-04T02:23:11.069Z] Copying: 903/1024 [MB] (301 MBps) [2024-11-04T02:23:12.971Z] Copying: 1024/1024 [MB] (average 301 MBps) 00:12:25.860 00:12:25.860 02:23:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:25.860 02:23:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:25.860 02:23:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:25.860 02:23:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:25.860 { 00:12:25.860 "subsystems": [ 00:12:25.860 { 00:12:25.860 "subsystem": "bdev", 00:12:25.860 "config": [ 00:12:25.860 { 00:12:25.860 "params": { 00:12:25.860 "block_size": 512, 00:12:25.860 "num_blocks": 2097152, 00:12:25.860 "name": "malloc0" 00:12:25.860 }, 00:12:25.860 "method": "bdev_malloc_create" 00:12:25.860 }, 00:12:25.860 { 00:12:25.860 "params": { 00:12:25.860 "io_mechanism": "libaio", 00:12:25.860 "filename": "/dev/nullb0", 00:12:25.860 "name": "null0" 00:12:25.860 }, 00:12:25.860 "method": "bdev_xnvme_create" 00:12:25.860 }, 00:12:25.860 { 00:12:25.860 "method": "bdev_wait_for_examine" 00:12:25.860 } 00:12:25.860 ] 00:12:25.860 } 00:12:25.860 ] 00:12:25.860 } 00:12:25.860 [2024-11-04 02:23:12.933808] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:12:25.860 [2024-11-04 02:23:12.933916] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68612 ] 00:12:26.119 [2024-11-04 02:23:13.082363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.119 [2024-11-04 02:23:13.159881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.024  [2024-11-04T02:23:16.070Z] Copying: 304/1024 [MB] (304 MBps) [2024-11-04T02:23:17.005Z] Copying: 609/1024 [MB] (304 MBps) [2024-11-04T02:23:17.572Z] Copying: 914/1024 [MB] (305 MBps) [2024-11-04T02:23:19.475Z] Copying: 1024/1024 [MB] (average 304 MBps) 00:12:32.364 00:12:32.364 02:23:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:32.364 02:23:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:32.364 02:23:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:32.364 02:23:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:32.364 02:23:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:32.364 02:23:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:32.364 { 00:12:32.364 "subsystems": [ 00:12:32.364 { 00:12:32.364 "subsystem": "bdev", 00:12:32.364 "config": [ 00:12:32.364 { 00:12:32.364 "params": { 00:12:32.364 "block_size": 512, 00:12:32.364 "num_blocks": 2097152, 00:12:32.364 "name": "malloc0" 00:12:32.364 }, 00:12:32.364 "method": "bdev_malloc_create" 00:12:32.364 }, 00:12:32.364 { 00:12:32.364 "params": { 00:12:32.364 "io_mechanism": "io_uring", 00:12:32.364 "filename": "/dev/nullb0", 00:12:32.364 "name": "null0" 00:12:32.364 }, 00:12:32.364 "method": "bdev_xnvme_create" 00:12:32.364 }, 00:12:32.364 { 00:12:32.364 "method": "bdev_wait_for_examine" 00:12:32.364 } 00:12:32.364 ] 00:12:32.364 } 00:12:32.364 ] 00:12:32.364 } 00:12:32.364 [2024-11-04 02:23:19.288429] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:12:32.364 [2024-11-04 02:23:19.288548] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68688 ] 00:12:32.364 [2024-11-04 02:23:19.445453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.635 [2024-11-04 02:23:19.532879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.593  [2024-11-04T02:23:22.636Z] Copying: 309/1024 [MB] (309 MBps) [2024-11-04T02:23:23.571Z] Copying: 620/1024 [MB] (311 MBps) [2024-11-04T02:23:23.830Z] Copying: 931/1024 [MB] (310 MBps) [2024-11-04T02:23:25.732Z] Copying: 1024/1024 [MB] (average 310 MBps) 00:12:38.621 00:12:38.621 02:23:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:38.621 02:23:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:38.621 02:23:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:38.621 02:23:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:38.621 { 00:12:38.621 "subsystems": [ 00:12:38.621 { 00:12:38.621 "subsystem": "bdev", 00:12:38.621 "config": [ 00:12:38.621 { 00:12:38.621 "params": { 00:12:38.621 "block_size": 512, 00:12:38.621 "num_blocks": 2097152, 00:12:38.621 "name": "malloc0" 00:12:38.621 }, 00:12:38.621 "method": "bdev_malloc_create" 00:12:38.621 }, 00:12:38.621 { 00:12:38.621 "params": { 00:12:38.621 "io_mechanism": "io_uring", 00:12:38.621 "filename": "/dev/nullb0", 00:12:38.621 "name": "null0" 00:12:38.621 }, 00:12:38.621 "method": "bdev_xnvme_create" 00:12:38.621 }, 00:12:38.621 { 00:12:38.621 "method": "bdev_wait_for_examine" 00:12:38.621 } 00:12:38.621 ] 00:12:38.621 } 00:12:38.621 ] 00:12:38.621 } 00:12:38.621 [2024-11-04 02:23:25.546048] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
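Across the four spdk_dd configs printed in this test, the only deltas are the io_mechanism field (libaio vs io_uring) and which bdev is --ib vs --ob. A sketch of the same sweep, reusing xnvme_copy.json from the note above (the sed rewrite is an illustrative shortcut, not harness code):

    for io in libaio io_uring; do
      # swap the io_mechanism in the config
      sed "s/\"io_mechanism\": \"[a-z_]*\"/\"io_mechanism\": \"$io\"/" xnvme_copy.json > cfg.json
      ./build/bin/spdk_dd --ib=malloc0 --ob=null0 --json cfg.json   # malloc -> null
      ./build/bin/spdk_dd --ib=null0 --ob=malloc0 --json cfg.json   # null -> malloc
    done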
00:12:38.621 [2024-11-04 02:23:25.546137] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68764 ] 00:12:38.621 [2024-11-04 02:23:25.695515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.879 [2024-11-04 02:23:25.772140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.782  [2024-11-04T02:23:28.829Z] Copying: 315/1024 [MB] (315 MBps) [2024-11-04T02:23:29.762Z] Copying: 632/1024 [MB] (316 MBps) [2024-11-04T02:23:30.020Z] Copying: 947/1024 [MB] (315 MBps) [2024-11-04T02:23:31.923Z] Copying: 1024/1024 [MB] (average 316 MBps) 00:12:44.812 00:12:44.812 02:23:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:44.812 02:23:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:44.812 00:12:44.812 real 0m25.225s 00:12:44.812 user 0m22.303s 00:12:44.812 sys 0m2.418s 00:12:44.812 02:23:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:44.812 ************************************ 00:12:44.812 END TEST xnvme_to_malloc_dd_copy 00:12:44.812 ************************************ 00:12:44.812 02:23:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:44.812 02:23:31 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:44.812 02:23:31 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:44.812 02:23:31 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:44.812 02:23:31 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:44.812 ************************************ 00:12:44.812 START TEST xnvme_bdevperf 00:12:44.812 ************************************ 00:12:44.812 02:23:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1127 -- # xnvme_bdevperf 00:12:44.812 02:23:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:44.812 02:23:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:44.812 02:23:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:44.812 02:23:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:44.812 02:23:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:44.812 02:23:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:44.812 02:23:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:44.812 02:23:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:44.812 02:23:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:44.812 02:23:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:44.812 02:23:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:44.812 02:23:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:44.812 02:23:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:44.812 02:23:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:44.812 02:23:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:44.812 
02:23:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:44.812 02:23:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:44.813 02:23:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:44.813 02:23:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:44.813 02:23:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:44.813 { 00:12:44.813 "subsystems": [ 00:12:44.813 { 00:12:44.813 "subsystem": "bdev", 00:12:44.813 "config": [ 00:12:44.813 { 00:12:44.813 "params": { 00:12:44.813 "io_mechanism": "libaio", 00:12:44.813 "filename": "/dev/nullb0", 00:12:44.813 "name": "null0" 00:12:44.813 }, 00:12:44.813 "method": "bdev_xnvme_create" 00:12:44.813 }, 00:12:44.813 { 00:12:44.813 "method": "bdev_wait_for_examine" 00:12:44.813 } 00:12:44.813 ] 00:12:44.813 } 00:12:44.813 ] 00:12:44.813 } 00:12:44.813 [2024-11-04 02:23:31.815520] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:12:44.813 [2024-11-04 02:23:31.815649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68863 ] 00:12:45.074 [2024-11-04 02:23:31.979335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.074 [2024-11-04 02:23:32.099061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.335 Running I/O for 5 seconds... 00:12:47.294 152000.00 IOPS, 593.75 MiB/s [2024-11-04T02:23:35.779Z] 156192.00 IOPS, 610.12 MiB/s [2024-11-04T02:23:36.713Z] 170517.33 IOPS, 666.08 MiB/s [2024-11-04T02:23:37.647Z] 177808.00 IOPS, 694.56 MiB/s [2024-11-04T02:23:37.647Z] 182208.00 IOPS, 711.75 MiB/s 00:12:50.536 Latency(us) 00:12:50.536 [2024-11-04T02:23:37.647Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:50.536 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:50.536 null0 : 5.00 182154.13 711.54 0.00 0.00 348.92 313.50 2054.30 00:12:50.536 [2024-11-04T02:23:37.647Z] =================================================================================================================== 00:12:50.536 [2024-11-04T02:23:37.648Z] Total : 182154.13 711.54 0.00 0.00 348.92 313.50 2054.30 00:12:51.106 02:23:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:51.106 02:23:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:51.106 02:23:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:51.106 02:23:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:51.106 02:23:37 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:51.106 02:23:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:51.106 { 00:12:51.106 "subsystems": [ 00:12:51.106 { 00:12:51.106 "subsystem": "bdev", 00:12:51.106 "config": [ 00:12:51.106 { 00:12:51.106 "params": { 00:12:51.106 "io_mechanism": "io_uring", 00:12:51.106 "filename": "/dev/nullb0", 00:12:51.106 "name": "null0" 00:12:51.106 }, 00:12:51.106 "method": "bdev_xnvme_create" 00:12:51.106 }, 
00:12:51.106 { 00:12:51.106 "method": "bdev_wait_for_examine" 00:12:51.106 } 00:12:51.106 ] 00:12:51.106 } 00:12:51.106 ] 00:12:51.106 } 00:12:51.106 [2024-11-04 02:23:38.014831] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:12:51.106 [2024-11-04 02:23:38.014963] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68936 ] 00:12:51.106 [2024-11-04 02:23:38.175233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.371 [2024-11-04 02:23:38.299635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.664 Running I/O for 5 seconds... 00:12:53.540 177024.00 IOPS, 691.50 MiB/s [2024-11-04T02:23:41.585Z] 200096.00 IOPS, 781.62 MiB/s [2024-11-04T02:23:42.960Z] 210560.00 IOPS, 822.50 MiB/s [2024-11-04T02:23:43.896Z] 215936.00 IOPS, 843.50 MiB/s [2024-11-04T02:23:43.896Z] 219148.80 IOPS, 856.05 MiB/s 00:12:56.785 Latency(us) 00:12:56.785 [2024-11-04T02:23:43.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:56.785 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:56.785 null0 : 5.00 219079.93 855.78 0.00 0.00 289.98 269.39 1991.29 00:12:56.785 [2024-11-04T02:23:43.896Z] =================================================================================================================== 00:12:56.785 [2024-11-04T02:23:43.896Z] Total : 219079.93 855.78 0.00 0.00 289.98 269.39 1991.29 00:12:57.045 02:23:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:57.045 02:23:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:57.305 00:12:57.305 real 0m12.430s 00:12:57.305 user 0m9.977s 00:12:57.305 sys 0m2.217s 00:12:57.305 ************************************ 00:12:57.305 END TEST xnvme_bdevperf 00:12:57.305 ************************************ 00:12:57.305 02:23:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:57.305 02:23:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:57.305 00:12:57.305 real 0m37.934s 00:12:57.305 user 0m32.396s 00:12:57.305 sys 0m4.760s 00:12:57.305 02:23:44 nvme_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:57.305 ************************************ 00:12:57.305 END TEST nvme_xnvme 00:12:57.305 ************************************ 00:12:57.305 02:23:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:57.305 02:23:44 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:57.305 02:23:44 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:57.305 02:23:44 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:57.305 02:23:44 -- common/autotest_common.sh@10 -- # set +x 00:12:57.305 ************************************ 00:12:57.305 START TEST blockdev_xnvme 00:12:57.305 ************************************ 00:12:57.305 02:23:44 blockdev_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:57.305 * Looking for test storage... 
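The two bdevperf passes above land at roughly 182K IOPS (711 MiB/s) for libaio and 219K IOPS (856 MiB/s) for io_uring at 4 KiB random reads, queue depth 64. A sketch reproducing the io_uring run with the same flags, assuming the null_blk device from the earlier note is still present:

    cat > xnvme_perf.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [
      { "method": "bdev_xnvme_create",
        "params": { "name": "null0", "filename": "/dev/nullb0", "io_mechanism": "io_uring" } },
      { "method": "bdev_wait_for_examine" } ] } ] }
    EOF

    # -q 64: queue depth, -w randread: workload, -t 5: seconds, -o 4096: IO size,
    # -T null0: target bdev -- the same flags xnvme.sh@74 passes
    ./build/examples/bdevperf --json xnvme_perf.json -q 64 -w randread -t 5 -T null0 -o 4096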
00:12:57.305 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:57.305 02:23:44 blockdev_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:57.305 02:23:44 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:12:57.305 02:23:44 blockdev_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:57.305 02:23:44 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:57.305 02:23:44 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:57.305 02:23:44 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:57.305 02:23:44 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:57.305 02:23:44 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:57.305 02:23:44 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:57.305 02:23:44 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:57.305 02:23:44 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:57.305 02:23:44 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:57.305 02:23:44 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:57.305 02:23:44 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:57.305 02:23:44 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:57.305 02:23:44 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:57.305 02:23:44 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:12:57.305 02:23:44 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:57.305 02:23:44 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:57.305 02:23:44 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:57.305 02:23:44 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:57.305 02:23:44 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:57.305 02:23:44 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:57.305 02:23:44 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:57.305 02:23:44 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:57.305 02:23:44 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:57.305 02:23:44 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:57.305 02:23:44 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:57.305 02:23:44 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:57.305 02:23:44 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:57.305 02:23:44 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:57.305 02:23:44 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:12:57.305 02:23:44 blockdev_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:57.305 02:23:44 blockdev_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:57.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.305 --rc genhtml_branch_coverage=1 00:12:57.305 --rc genhtml_function_coverage=1 00:12:57.305 --rc genhtml_legend=1 00:12:57.305 --rc geninfo_all_blocks=1 00:12:57.305 --rc geninfo_unexecuted_blocks=1 00:12:57.305 00:12:57.305 ' 00:12:57.305 02:23:44 blockdev_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:57.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.305 --rc genhtml_branch_coverage=1 00:12:57.305 --rc genhtml_function_coverage=1 00:12:57.305 --rc genhtml_legend=1 
00:12:57.305 --rc geninfo_all_blocks=1 00:12:57.305 --rc geninfo_unexecuted_blocks=1 00:12:57.305 00:12:57.305 ' 00:12:57.305 02:23:44 blockdev_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:57.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.305 --rc genhtml_branch_coverage=1 00:12:57.305 --rc genhtml_function_coverage=1 00:12:57.305 --rc genhtml_legend=1 00:12:57.305 --rc geninfo_all_blocks=1 00:12:57.305 --rc geninfo_unexecuted_blocks=1 00:12:57.305 00:12:57.305 ' 00:12:57.305 02:23:44 blockdev_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:57.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.305 --rc genhtml_branch_coverage=1 00:12:57.305 --rc genhtml_function_coverage=1 00:12:57.305 --rc genhtml_legend=1 00:12:57.305 --rc geninfo_all_blocks=1 00:12:57.305 --rc geninfo_unexecuted_blocks=1 00:12:57.305 00:12:57.305 ' 00:12:57.305 02:23:44 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:57.305 02:23:44 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:57.305 02:23:44 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:57.305 02:23:44 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:57.305 02:23:44 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:57.305 02:23:44 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:57.305 02:23:44 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:57.305 02:23:44 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:57.305 02:23:44 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:12:57.305 02:23:44 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:57.305 02:23:44 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:57.305 02:23:44 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:57.305 02:23:44 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:12:57.305 02:23:44 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:57.305 02:23:44 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:57.305 02:23:44 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:12:57.305 02:23:44 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:57.305 02:23:44 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:12:57.305 02:23:44 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:57.305 02:23:44 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:57.305 02:23:44 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:57.305 02:23:44 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:12:57.305 02:23:44 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:12:57.305 02:23:44 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:57.305 02:23:44 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69085 00:12:57.305 02:23:44 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:57.305 02:23:44 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 69085 00:12:57.305 02:23:44 blockdev_xnvme -- common/autotest_common.sh@833 -- # '[' -z 69085 ']' 00:12:57.305 02:23:44 blockdev_xnvme -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:12:57.305 02:23:44 blockdev_xnvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:57.305 02:23:44 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:57.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.305 02:23:44 blockdev_xnvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.305 02:23:44 blockdev_xnvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:57.306 02:23:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:57.564 [2024-11-04 02:23:44.472809] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:12:57.565 [2024-11-04 02:23:44.472940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69085 ] 00:12:57.565 [2024-11-04 02:23:44.629525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.823 [2024-11-04 02:23:44.711827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.390 02:23:45 blockdev_xnvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:58.390 02:23:45 blockdev_xnvme -- common/autotest_common.sh@866 -- # return 0 00:12:58.390 02:23:45 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:58.390 02:23:45 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:12:58.391 02:23:45 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:12:58.391 02:23:45 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:12:58.391 02:23:45 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:58.649 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:58.649 Waiting for block devices as requested 00:12:58.649 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:58.908 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:58.908 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:58.908 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:04.175 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:04.176 02:23:50 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned 
nvme1n1 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:04.176 02:23:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:04.176 02:23:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:13:04.176 02:23:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:04.176 02:23:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:04.176 02:23:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:04.176 02:23:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:13:04.176 02:23:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:04.176 02:23:50 blockdev_xnvme 
-- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:04.176 02:23:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:04.176 02:23:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:13:04.176 02:23:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:04.176 02:23:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:04.176 02:23:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:04.176 02:23:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:13:04.176 02:23:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:04.176 02:23:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:04.176 02:23:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:04.176 02:23:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:13:04.176 02:23:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:04.176 02:23:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:04.176 02:23:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:04.176 02:23:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:13:04.176 02:23:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:04.176 02:23:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:04.176 02:23:50 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:13:04.176 02:23:50 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.176 02:23:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:04.176 02:23:50 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:13:04.176 nvme0n1 00:13:04.176 nvme1n1 00:13:04.176 nvme2n1 00:13:04.176 nvme2n2 00:13:04.176 nvme2n3 00:13:04.176 nvme3n1 00:13:04.176 02:23:51 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.176 02:23:51 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:13:04.176 02:23:51 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.176 02:23:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:04.176 02:23:51 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.176 02:23:51 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:13:04.176 02:23:51 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:13:04.176 02:23:51 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.176 02:23:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:04.176 02:23:51 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.176 02:23:51 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:13:04.176 02:23:51 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.176 02:23:51 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:13:04.176 02:23:51 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.176 02:23:51 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:04.176 02:23:51 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.176 02:23:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:04.176 02:23:51 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.176 02:23:51 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:13:04.176 02:23:51 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:13:04.176 02:23:51 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.176 02:23:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:04.176 02:23:51 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:13:04.176 02:23:51 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.176 02:23:51 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:13:04.176 02:23:51 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:13:04.177 02:23:51 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "0a16d3f0-8428-425c-94c0-e3e2fb63d664"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "0a16d3f0-8428-425c-94c0-e3e2fb63d664",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "cbdd7d89-9396-424a-995a-d58bb5f9ad48"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "cbdd7d89-9396-424a-995a-d58bb5f9ad48",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "ca1ad202-2c6a-4c71-ba9c-3c23be9dd421"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ca1ad202-2c6a-4c71-ba9c-3c23be9dd421",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "a968bb52-2c9b-4b5a-adcf-395cc6902fa7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a968bb52-2c9b-4b5a-adcf-395cc6902fa7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "1f800f44-7247-4a79-b550-6823852f032f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1f800f44-7247-4a79-b550-6823852f032f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "38ec6695-6d27-42bc-8ba4-7495b3a8b04c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "38ec6695-6d27-42bc-8ba4-7495b3a8b04c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:04.177 02:23:51 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:13:04.177 02:23:51 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:13:04.177 02:23:51 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:13:04.177 02:23:51 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 69085 00:13:04.177 02:23:51 
blockdev_xnvme -- common/autotest_common.sh@952 -- # '[' -z 69085 ']' 00:13:04.177 02:23:51 blockdev_xnvme -- common/autotest_common.sh@956 -- # kill -0 69085 00:13:04.177 02:23:51 blockdev_xnvme -- common/autotest_common.sh@957 -- # uname 00:13:04.177 02:23:51 blockdev_xnvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:04.177 02:23:51 blockdev_xnvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69085 00:13:04.177 killing process with pid 69085 00:13:04.177 02:23:51 blockdev_xnvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:04.177 02:23:51 blockdev_xnvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:04.177 02:23:51 blockdev_xnvme -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69085' 00:13:04.177 02:23:51 blockdev_xnvme -- common/autotest_common.sh@971 -- # kill 69085 00:13:04.177 02:23:51 blockdev_xnvme -- common/autotest_common.sh@976 -- # wait 69085 00:13:05.552 02:23:52 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:05.552 02:23:52 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:05.552 02:23:52 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:13:05.552 02:23:52 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:05.552 02:23:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:05.552 ************************************ 00:13:05.552 START TEST bdev_hello_world 00:13:05.552 ************************************ 00:13:05.552 02:23:52 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:05.552 [2024-11-04 02:23:52.415075] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:13:05.552 [2024-11-04 02:23:52.415212] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69435 ] 00:13:05.553 [2024-11-04 02:23:52.570568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.553 [2024-11-04 02:23:52.654240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.120 [2024-11-04 02:23:52.936352] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:06.120 [2024-11-04 02:23:52.936509] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:13:06.120 [2024-11-04 02:23:52.936526] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:06.120 [2024-11-04 02:23:52.937986] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:06.120 [2024-11-04 02:23:52.938361] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:06.120 [2024-11-04 02:23:52.938383] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:06.120 [2024-11-04 02:23:52.938578] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
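The hello_bdev run above only needs a config that creates its target bdev. A sketch equivalent to the nvme0n1 entry of the harness-generated bdev.json (the other five bdev_xnvme_create entries from blockdev.sh@100 are omitted; the exact JSON shape is an assumption based on the saved-config format seen earlier in this log):

    cat > bdev.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [
      { "method": "bdev_xnvme_create",
        "params": { "name": "nvme0n1", "filename": "/dev/nvme0n1", "io_mechanism": "io_uring" } },
      { "method": "bdev_wait_for_examine" } ] } ] }
    EOF

    ./build/examples/hello_bdev --json bdev.json -b nvme0n1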
00:13:06.120 00:13:06.120 [2024-11-04 02:23:52.938590] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:06.381 00:13:06.381 ************************************ 00:13:06.381 END TEST bdev_hello_world 00:13:06.381 ************************************ 00:13:06.381 real 0m1.121s 00:13:06.381 user 0m0.859s 00:13:06.381 sys 0m0.150s 00:13:06.381 02:23:53 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:06.381 02:23:53 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:06.640 02:23:53 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:13:06.640 02:23:53 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:06.640 02:23:53 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:06.640 02:23:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:06.640 ************************************ 00:13:06.640 START TEST bdev_bounds 00:13:06.640 ************************************ 00:13:06.640 02:23:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:13:06.640 02:23:53 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=69466 00:13:06.640 Process bdevio pid: 69466 00:13:06.640 02:23:53 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:06.640 02:23:53 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 69466' 00:13:06.640 02:23:53 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 69466 00:13:06.640 02:23:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 69466 ']' 00:13:06.640 02:23:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.640 02:23:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:06.640 02:23:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.640 02:23:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:06.640 02:23:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:06.640 02:23:53 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:06.640 [2024-11-04 02:23:53.586424] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
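bdevio starts as a server (-w holds it waiting for an RPC) and the test matrix below is kicked off by tests.py. A sketch of the same two-step flow, assuming the default /var/tmp/spdk.sock RPC socket and the bdev.json from the previous note:

    ./test/bdev/bdevio/bdevio -w -s 0 --json bdev.json &
    BDEVIO_PID=$!

    # triggers the write/read/reset/compare suites per bdev, as seen below
    ./test/bdev/bdevio/tests.py perform_tests

    # the harness likewise tears the server down via killprocess
    kill $BDEVIO_PID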
00:13:06.640 [2024-11-04 02:23:53.586512] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69466 ] 00:13:06.640 [2024-11-04 02:23:53.736092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:06.898 [2024-11-04 02:23:53.816401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.898 [2024-11-04 02:23:53.816607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.898 [2024-11-04 02:23:53.816612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.465 02:23:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:07.465 02:23:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:13:07.465 02:23:54 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:07.465 I/O targets: 00:13:07.465 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:07.465 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:07.465 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:07.465 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:07.465 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:07.465 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:07.465 00:13:07.465 00:13:07.465 CUnit - A unit testing framework for C - Version 2.1-3 00:13:07.465 http://cunit.sourceforge.net/ 00:13:07.465 00:13:07.465 00:13:07.465 Suite: bdevio tests on: nvme3n1 00:13:07.465 Test: blockdev write read block ...passed 00:13:07.465 Test: blockdev write zeroes read block ...passed 00:13:07.465 Test: blockdev write zeroes read no split ...passed 00:13:07.465 Test: blockdev write zeroes read split ...passed 00:13:07.465 Test: blockdev write zeroes read split partial ...passed 00:13:07.465 Test: blockdev reset ...passed 00:13:07.465 Test: blockdev write read 8 blocks ...passed 00:13:07.465 Test: blockdev write read size > 128k ...passed 00:13:07.465 Test: blockdev write read invalid size ...passed 00:13:07.465 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:07.465 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:07.465 Test: blockdev write read max offset ...passed 00:13:07.465 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:07.465 Test: blockdev writev readv 8 blocks ...passed 00:13:07.465 Test: blockdev writev readv 30 x 1block ...passed 00:13:07.465 Test: blockdev writev readv block ...passed 00:13:07.465 Test: blockdev writev readv size > 128k ...passed 00:13:07.465 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:07.465 Test: blockdev comparev and writev ...passed 00:13:07.465 Test: blockdev nvme passthru rw ...passed 00:13:07.465 Test: blockdev nvme passthru vendor specific ...passed 00:13:07.465 Test: blockdev nvme admin passthru ...passed 00:13:07.465 Test: blockdev copy ...passed 00:13:07.465 Suite: bdevio tests on: nvme2n3 00:13:07.465 Test: blockdev write read block ...passed 00:13:07.465 Test: blockdev write zeroes read block ...passed 00:13:07.724 Test: blockdev write zeroes read no split ...passed 00:13:07.724 Test: blockdev write zeroes read split ...passed 00:13:07.724 Test: blockdev write zeroes read split partial ...passed 00:13:07.724 Test: blockdev reset ...passed 
00:13:07.724 Test: blockdev write read 8 blocks ...passed 00:13:07.724 Test: blockdev write read size > 128k ...passed 00:13:07.724 Test: blockdev write read invalid size ...passed 00:13:07.724 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:07.724 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:07.724 Test: blockdev write read max offset ...passed 00:13:07.724 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:07.724 Test: blockdev writev readv 8 blocks ...passed 00:13:07.724 Test: blockdev writev readv 30 x 1block ...passed 00:13:07.724 Test: blockdev writev readv block ...passed 00:13:07.724 Test: blockdev writev readv size > 128k ...passed 00:13:07.724 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:07.724 Test: blockdev comparev and writev ...passed 00:13:07.724 Test: blockdev nvme passthru rw ...passed 00:13:07.724 Test: blockdev nvme passthru vendor specific ...passed 00:13:07.724 Test: blockdev nvme admin passthru ...passed 00:13:07.724 Test: blockdev copy ...passed 00:13:07.724 Suite: bdevio tests on: nvme2n2 00:13:07.724 Test: blockdev write read block ...passed 00:13:07.724 Test: blockdev write zeroes read block ...passed 00:13:07.724 Test: blockdev write zeroes read no split ...passed 00:13:07.724 Test: blockdev write zeroes read split ...passed 00:13:07.724 Test: blockdev write zeroes read split partial ...passed 00:13:07.724 Test: blockdev reset ...passed 00:13:07.724 Test: blockdev write read 8 blocks ...passed 00:13:07.724 Test: blockdev write read size > 128k ...passed 00:13:07.724 Test: blockdev write read invalid size ...passed 00:13:07.724 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:07.724 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:07.724 Test: blockdev write read max offset ...passed 00:13:07.724 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:07.724 Test: blockdev writev readv 8 blocks ...passed 00:13:07.724 Test: blockdev writev readv 30 x 1block ...passed 00:13:07.724 Test: blockdev writev readv block ...passed 00:13:07.724 Test: blockdev writev readv size > 128k ...passed 00:13:07.724 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:07.724 Test: blockdev comparev and writev ...passed 00:13:07.724 Test: blockdev nvme passthru rw ...passed 00:13:07.724 Test: blockdev nvme passthru vendor specific ...passed 00:13:07.724 Test: blockdev nvme admin passthru ...passed 00:13:07.724 Test: blockdev copy ...passed 00:13:07.724 Suite: bdevio tests on: nvme2n1 00:13:07.724 Test: blockdev write read block ...passed 00:13:07.724 Test: blockdev write zeroes read block ...passed 00:13:07.724 Test: blockdev write zeroes read no split ...passed 00:13:07.724 Test: blockdev write zeroes read split ...passed 00:13:07.725 Test: blockdev write zeroes read split partial ...passed 00:13:07.725 Test: blockdev reset ...passed 00:13:07.725 Test: blockdev write read 8 blocks ...passed 00:13:07.725 Test: blockdev write read size > 128k ...passed 00:13:07.725 Test: blockdev write read invalid size ...passed 00:13:07.725 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:07.725 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:07.725 Test: blockdev write read max offset ...passed 00:13:07.725 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:07.725 Test: blockdev writev readv 8 blocks 
...passed 00:13:07.725 Test: blockdev writev readv 30 x 1block ...passed 00:13:07.725 Test: blockdev writev readv block ...passed 00:13:07.725 Test: blockdev writev readv size > 128k ...passed 00:13:07.725 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:07.725 Test: blockdev comparev and writev ...passed 00:13:07.725 Test: blockdev nvme passthru rw ...passed 00:13:07.725 Test: blockdev nvme passthru vendor specific ...passed 00:13:07.725 Test: blockdev nvme admin passthru ...passed 00:13:07.725 Test: blockdev copy ...passed 00:13:07.725 Suite: bdevio tests on: nvme1n1 00:13:07.725 Test: blockdev write read block ...passed 00:13:07.725 Test: blockdev write zeroes read block ...passed 00:13:07.725 Test: blockdev write zeroes read no split ...passed 00:13:07.725 Test: blockdev write zeroes read split ...passed 00:13:07.725 Test: blockdev write zeroes read split partial ...passed 00:13:07.725 Test: blockdev reset ...passed 00:13:07.725 Test: blockdev write read 8 blocks ...passed 00:13:07.725 Test: blockdev write read size > 128k ...passed 00:13:07.725 Test: blockdev write read invalid size ...passed 00:13:07.725 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:07.725 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:07.725 Test: blockdev write read max offset ...passed 00:13:07.725 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:07.725 Test: blockdev writev readv 8 blocks ...passed 00:13:07.725 Test: blockdev writev readv 30 x 1block ...passed 00:13:07.725 Test: blockdev writev readv block ...passed 00:13:07.725 Test: blockdev writev readv size > 128k ...passed 00:13:07.725 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:07.725 Test: blockdev comparev and writev ...passed 00:13:07.725 Test: blockdev nvme passthru rw ...passed 00:13:07.725 Test: blockdev nvme passthru vendor specific ...passed 00:13:07.725 Test: blockdev nvme admin passthru ...passed 00:13:07.725 Test: blockdev copy ...passed 00:13:07.725 Suite: bdevio tests on: nvme0n1 00:13:07.725 Test: blockdev write read block ...passed 00:13:07.725 Test: blockdev write zeroes read block ...passed 00:13:07.725 Test: blockdev write zeroes read no split ...passed 00:13:07.725 Test: blockdev write zeroes read split ...passed 00:13:07.725 Test: blockdev write zeroes read split partial ...passed 00:13:07.725 Test: blockdev reset ...passed 00:13:07.725 Test: blockdev write read 8 blocks ...passed 00:13:07.725 Test: blockdev write read size > 128k ...passed 00:13:07.725 Test: blockdev write read invalid size ...passed 00:13:07.725 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:07.725 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:07.725 Test: blockdev write read max offset ...passed 00:13:07.725 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:07.725 Test: blockdev writev readv 8 blocks ...passed 00:13:07.725 Test: blockdev writev readv 30 x 1block ...passed 00:13:07.725 Test: blockdev writev readv block ...passed 00:13:07.725 Test: blockdev writev readv size > 128k ...passed 00:13:07.725 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:07.725 Test: blockdev comparev and writev ...passed 00:13:07.725 Test: blockdev nvme passthru rw ...passed 00:13:07.725 Test: blockdev nvme passthru vendor specific ...passed 00:13:07.725 Test: blockdev nvme admin passthru ...passed 00:13:07.725 Test: blockdev copy ...passed 
00:13:07.725
00:13:07.725 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:13:07.725               suites      6      6    n/a      0        0
00:13:07.725                tests    138    138    138      0        0
00:13:07.725              asserts    780    780    780      0      n/a
00:13:07.725
00:13:07.725 Elapsed time = 0.847 seconds
00:13:07.725 0
00:13:07.984 02:23:54 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 69466
00:13:07.984 02:23:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 69466 ']'
00:13:07.984 02:23:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 69466
00:13:07.984 02:23:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname
00:13:07.984 02:23:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:13:07.984 02:23:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69466
00:13:07.984 killing process with pid 69466
02:23:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:13:07.984 02:23:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:13:07.984 02:23:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69466'
00:13:07.984 02:23:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 69466
00:13:07.984 02:23:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 69466
00:13:08.553 ************************************
00:13:08.553 END TEST bdev_bounds
00:13:08.553 ************************************
00:13:08.553 02:23:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:13:08.553
00:13:08.553 real 0m1.888s
00:13:08.553 user 0m4.828s
00:13:08.553 sys 0m0.251s
00:13:08.553 02:23:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable
00:13:08.553 02:23:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:13:08.553 02:23:55 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' ''
00:13:08.553 02:23:55 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:13:08.553 02:23:55 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:13:08.553 02:23:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:13:08.553 ************************************
00:13:08.553 START TEST bdev_nbd
00:13:08.553 ************************************
00:13:08.553 02:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' ''
00:13:08.553 02:23:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:13:08.553 02:23:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:13:08.553 02:23:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:08.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
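What follows is the bdev_nbd half of the run: nbd_function_test points a bdev_svc app at the same bdev.json and exports each of the six bdevs through the kernel NBD driver, so ordinary block tools can exercise SPDK bdevs. Stripped of the xtrace noise, the start/inspect/stop cycle traced below reduces to three RPCs over the /var/tmp/spdk-nbd.sock socket; a condensed sketch using only the calls that appear in this trace:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc nbd_start_disk nvme0n1 /dev/nbd0            # bind a bdev to a kernel nbd node
    $rpc nbd_get_disks | jq -r '.[] | .nbd_device'   # list the active bindings
    $rpc nbd_stop_disk /dev/nbd0                     # tear the binding back down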
00:13:08.553 02:23:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:08.554 02:23:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:08.554 02:23:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:08.554 02:23:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:13:08.554 02:23:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:08.554 02:23:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:08.554 02:23:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:08.554 02:23:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:13:08.554 02:23:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:08.554 02:23:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:08.554 02:23:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:08.554 02:23:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:08.554 02:23:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=69520 00:13:08.554 02:23:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:08.554 02:23:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 69520 /var/tmp/spdk-nbd.sock 00:13:08.554 02:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 69520 ']' 00:13:08.554 02:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:08.554 02:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:08.554 02:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:08.554 02:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:08.554 02:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:08.554 02:23:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:08.554 [2024-11-04 02:23:55.538206] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
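The bdev_svc helper above is launched with -r (RPC socket path), -i (shm id), and --json (bdev config), and waitforlisten blocks until that socket answers. A rough equivalent of this startup dance, assuming the same paths (polling rpc_get_methods is one way to approximate waitforlisten, not the helper's literal implementation):

    spdk=/home/vagrant/spdk_repo/spdk
    $spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
        --json $spdk/test/bdev/bdev.json &
    nbd_pid=$!                     # the pid later torn down by killprocess
    # poll until the app is up and serving RPCs on the Unix socket
    until $spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock rpc_get_methods \
          >/dev/null 2>&1; do
        sleep 0.1
    done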
00:13:08.554 [2024-11-04 02:23:55.538342] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:08.815 [2024-11-04 02:23:55.702124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.815 [2024-11-04 02:23:55.821461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.387 02:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:09.387 02:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:13:09.387 02:23:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:09.387 02:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:09.387 02:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:09.387 02:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:09.387 02:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:09.387 02:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:09.387 02:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:09.387 02:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:09.387 02:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:09.387 02:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:09.387 02:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:09.387 02:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:09.387 02:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:09.661 02:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:09.661 02:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:09.661 02:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:09.661 02:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:09.661 02:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:09.661 02:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:09.661 02:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:09.661 02:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:09.661 02:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:09.661 02:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:09.661 02:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:09.661 02:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:09.661 
1+0 records in 00:13:09.661 1+0 records out 00:13:09.661 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449882 s, 9.1 MB/s 00:13:09.661 02:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.661 02:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:09.661 02:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.661 02:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:09.661 02:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:09.661 02:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:09.661 02:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:09.661 02:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:09.939 02:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:09.939 02:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:09.939 02:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:09.939 02:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:09.939 02:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:09.939 02:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:09.939 02:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:09.939 02:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:09.939 02:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:09.939 02:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:09.939 02:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:09.939 02:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:09.940 1+0 records in 00:13:09.940 1+0 records out 00:13:09.940 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000524356 s, 7.8 MB/s 00:13:09.940 02:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.940 02:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:09.940 02:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.940 02:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:09.940 02:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:09.940 02:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:09.940 02:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:09.940 02:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:10.201 02:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:10.201 02:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:10.201 02:23:57 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:10.201 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:13:10.201 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:10.201 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:10.201 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:10.201 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:13:10.201 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:10.201 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:10.201 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:10.201 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:10.201 1+0 records in 00:13:10.201 1+0 records out 00:13:10.201 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00150854 s, 2.7 MB/s 00:13:10.201 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.201 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:10.201 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.201 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:10.201 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:10.201 02:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:10.201 02:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:10.201 02:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:13:10.462 02:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:10.462 02:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:10.462 02:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:10.462 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:13:10.462 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:10.462 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:10.462 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:10.462 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:13:10.462 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:10.462 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:10.462 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:10.462 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:10.462 1+0 records in 00:13:10.462 1+0 records out 00:13:10.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104889 s, 3.9 MB/s 00:13:10.462 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.462 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:10.462 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.462 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:10.462 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:10.462 02:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:10.462 02:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:10.463 02:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:13:10.724 02:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:10.724 02:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:10.724 02:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:10.724 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:13:10.724 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:10.724 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:10.724 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:10.724 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:13:10.724 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:10.724 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:10.724 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:10.724 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:10.724 1+0 records in 00:13:10.724 1+0 records out 00:13:10.724 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00139727 s, 2.9 MB/s 00:13:10.724 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.724 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:10.724 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.724 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:10.724 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:10.724 02:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:10.724 02:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:10.724 02:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:10.985 02:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:10.985 02:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:10.985 02:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:10.985 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:13:10.985 02:23:57 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:10.985 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:10.985 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:10.985 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:13:10.985 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:10.985 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:10.985 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:10.985 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:10.985 1+0 records in 00:13:10.985 1+0 records out 00:13:10.985 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000920842 s, 4.4 MB/s 00:13:10.985 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.985 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:10.985 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.985 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:10.985 02:23:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:10.985 02:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:10.985 02:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:10.985 02:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:10.985 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:10.985 { 00:13:10.985 "nbd_device": "/dev/nbd0", 00:13:10.985 "bdev_name": "nvme0n1" 00:13:10.985 }, 00:13:10.985 { 00:13:10.985 "nbd_device": "/dev/nbd1", 00:13:10.985 "bdev_name": "nvme1n1" 00:13:10.985 }, 00:13:10.985 { 00:13:10.985 "nbd_device": "/dev/nbd2", 00:13:10.985 "bdev_name": "nvme2n1" 00:13:10.985 }, 00:13:10.985 { 00:13:10.985 "nbd_device": "/dev/nbd3", 00:13:10.985 "bdev_name": "nvme2n2" 00:13:10.985 }, 00:13:10.985 { 00:13:10.985 "nbd_device": "/dev/nbd4", 00:13:10.985 "bdev_name": "nvme2n3" 00:13:10.985 }, 00:13:10.985 { 00:13:10.985 "nbd_device": "/dev/nbd5", 00:13:10.985 "bdev_name": "nvme3n1" 00:13:10.985 } 00:13:10.985 ]' 00:13:10.985 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:10.985 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:10.985 { 00:13:10.985 "nbd_device": "/dev/nbd0", 00:13:10.985 "bdev_name": "nvme0n1" 00:13:10.985 }, 00:13:10.985 { 00:13:10.985 "nbd_device": "/dev/nbd1", 00:13:10.985 "bdev_name": "nvme1n1" 00:13:10.985 }, 00:13:10.985 { 00:13:10.985 "nbd_device": "/dev/nbd2", 00:13:10.985 "bdev_name": "nvme2n1" 00:13:10.985 }, 00:13:10.985 { 00:13:10.985 "nbd_device": "/dev/nbd3", 00:13:10.985 "bdev_name": "nvme2n2" 00:13:10.985 }, 00:13:10.985 { 00:13:10.985 "nbd_device": "/dev/nbd4", 00:13:10.985 "bdev_name": "nvme2n3" 00:13:10.985 }, 00:13:10.985 { 00:13:10.985 "nbd_device": "/dev/nbd5", 00:13:10.985 "bdev_name": "nvme3n1" 00:13:10.985 } 00:13:10.985 ]' 00:13:10.985 02:23:58 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:11.247 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:11.247 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:11.247 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:11.247 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:11.247 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:11.247 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:11.247 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:11.247 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:11.247 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:11.247 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:11.247 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:11.247 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:11.248 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:11.248 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:11.248 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:11.248 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:11.248 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:11.509 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:11.509 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:11.510 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:11.510 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:11.510 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:11.510 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:11.510 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:11.510 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:11.510 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:11.510 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:11.769 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:11.769 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:11.769 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:11.769 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:11.769 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:11.769 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:13:11.769 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:11.769 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:11.769 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:11.769 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:12.029 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:12.029 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:12.029 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:12.029 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:12.029 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:12.029 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:12.029 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:12.029 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:12.029 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:12.029 02:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:12.290 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:12.290 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:12.290 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:12.290 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:12.290 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:12.290 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:12.290 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:12.290 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:12.290 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:12.290 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:12.290 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:12.290 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:12.290 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:12.290 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:12.290 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:12.290 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:12.290 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:12.290 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:12.290 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:12.290 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:12.290 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:12.550 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:12.550 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:12.550 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:12.550 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:12.550 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:12.550 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:12.550 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:12.550 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:12.550 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:12.550 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:12.550 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:12.550 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:12.550 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:12.550 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:12.550 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:12.550 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:12.550 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:12.550 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:12.550 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:12.550 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:12.550 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:12.550 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:12.550 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:12.550 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:12.550 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:12.550 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:12.550 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:12.550 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:12.808 /dev/nbd0 00:13:12.808 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:12.808 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:12.808 02:23:59 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:12.808 02:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:12.808 02:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:12.808 02:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:12.808 02:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:12.808 02:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:12.808 02:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:12.808 02:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:12.808 02:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:12.808 1+0 records in 00:13:12.808 1+0 records out 00:13:12.808 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000830376 s, 4.9 MB/s 00:13:12.808 02:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.808 02:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:12.808 02:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.808 02:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:12.808 02:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:12.808 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:12.808 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:12.808 02:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:13.066 /dev/nbd1 00:13:13.066 02:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:13.066 02:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:13.066 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:13.066 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:13.066 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:13.066 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:13.066 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:13.066 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:13.066 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:13.066 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:13.066 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:13.066 1+0 records in 00:13:13.066 1+0 records out 00:13:13.066 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000543156 s, 7.5 MB/s 00:13:13.066 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.066 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:13.067 02:24:00 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.067 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:13.067 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:13.067 02:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:13.067 02:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:13.067 02:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:13:13.325 /dev/nbd10 00:13:13.325 02:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:13.325 02:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:13.325 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:13:13.325 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:13.325 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:13.325 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:13.325 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:13:13.325 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:13.325 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:13.325 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:13.325 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:13.325 1+0 records in 00:13:13.325 1+0 records out 00:13:13.325 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000698823 s, 5.9 MB/s 00:13:13.325 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.325 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:13.325 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.325 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:13.325 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:13.325 02:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:13.325 02:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:13.325 02:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:13.584 /dev/nbd11 00:13:13.584 02:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:13.584 02:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:13.584 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:13:13.584 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:13.584 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:13.584 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:13.584 02:24:00 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:13:13.584 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:13.584 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:13.584 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:13.584 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:13.584 1+0 records in 00:13:13.584 1+0 records out 00:13:13.584 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000617353 s, 6.6 MB/s 00:13:13.584 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.584 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:13.584 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.584 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:13.584 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:13.584 02:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:13.584 02:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:13.584 02:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:13.842 /dev/nbd12 00:13:13.842 02:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:13.842 02:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:13.842 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:13:13.842 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:13.842 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:13.842 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:13.842 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:13:13.842 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:13.842 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:13.843 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:13.843 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:13.843 1+0 records in 00:13:13.843 1+0 records out 00:13:13.843 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00344094 s, 1.2 MB/s 00:13:13.843 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.843 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:13.843 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.843 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:13.843 02:24:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:13.843 02:24:00 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:13.843 02:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:13.843 02:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:14.101 /dev/nbd13 00:13:14.101 02:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:14.101 02:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:14.101 02:24:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:13:14.101 02:24:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:14.101 02:24:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:14.101 02:24:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:14.101 02:24:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:13:14.101 02:24:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:14.101 02:24:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:14.101 02:24:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:14.101 02:24:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:14.101 1+0 records in 00:13:14.101 1+0 records out 00:13:14.101 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000733914 s, 5.6 MB/s 00:13:14.101 02:24:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:14.101 02:24:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:14.101 02:24:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:14.101 02:24:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:14.101 02:24:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:14.101 02:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:14.101 02:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:14.101 02:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:14.101 02:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:14.101 02:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:14.360 02:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:14.360 { 00:13:14.360 "nbd_device": "/dev/nbd0", 00:13:14.360 "bdev_name": "nvme0n1" 00:13:14.360 }, 00:13:14.360 { 00:13:14.360 "nbd_device": "/dev/nbd1", 00:13:14.360 "bdev_name": "nvme1n1" 00:13:14.360 }, 00:13:14.360 { 00:13:14.360 "nbd_device": "/dev/nbd10", 00:13:14.360 "bdev_name": "nvme2n1" 00:13:14.360 }, 00:13:14.360 { 00:13:14.360 "nbd_device": "/dev/nbd11", 00:13:14.360 "bdev_name": "nvme2n2" 00:13:14.360 }, 00:13:14.360 { 00:13:14.360 "nbd_device": "/dev/nbd12", 00:13:14.360 "bdev_name": "nvme2n3" 00:13:14.360 }, 00:13:14.360 { 00:13:14.360 "nbd_device": "/dev/nbd13", 00:13:14.360 "bdev_name": "nvme3n1" 00:13:14.360 } 00:13:14.360 ]' 00:13:14.360 02:24:01 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:14.360 { 00:13:14.360 "nbd_device": "/dev/nbd0", 00:13:14.360 "bdev_name": "nvme0n1" 00:13:14.360 }, 00:13:14.360 { 00:13:14.360 "nbd_device": "/dev/nbd1", 00:13:14.360 "bdev_name": "nvme1n1" 00:13:14.360 }, 00:13:14.360 { 00:13:14.360 "nbd_device": "/dev/nbd10", 00:13:14.360 "bdev_name": "nvme2n1" 00:13:14.360 }, 00:13:14.360 { 00:13:14.360 "nbd_device": "/dev/nbd11", 00:13:14.360 "bdev_name": "nvme2n2" 00:13:14.360 }, 00:13:14.360 { 00:13:14.360 "nbd_device": "/dev/nbd12", 00:13:14.360 "bdev_name": "nvme2n3" 00:13:14.360 }, 00:13:14.360 { 00:13:14.360 "nbd_device": "/dev/nbd13", 00:13:14.360 "bdev_name": "nvme3n1" 00:13:14.360 } 00:13:14.360 ]' 00:13:14.360 02:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:14.360 02:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:14.360 /dev/nbd1 00:13:14.360 /dev/nbd10 00:13:14.360 /dev/nbd11 00:13:14.360 /dev/nbd12 00:13:14.360 /dev/nbd13' 00:13:14.360 02:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:14.360 02:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:14.360 /dev/nbd1 00:13:14.360 /dev/nbd10 00:13:14.360 /dev/nbd11 00:13:14.360 /dev/nbd12 00:13:14.360 /dev/nbd13' 00:13:14.360 02:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:14.360 02:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:14.360 02:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:14.360 02:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:14.360 02:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:14.360 02:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:14.360 02:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:14.360 02:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:14.360 02:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:14.360 02:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:14.360 02:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:14.360 256+0 records in 00:13:14.360 256+0 records out 00:13:14.360 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0068317 s, 153 MB/s 00:13:14.360 02:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:14.360 02:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:14.360 256+0 records in 00:13:14.360 256+0 records out 00:13:14.360 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122313 s, 8.6 MB/s 00:13:14.360 02:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:14.360 02:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:14.617 256+0 records in 00:13:14.617 256+0 records out 00:13:14.617 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.181311 s, 5.8 MB/s 00:13:14.617 02:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:14.617 02:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:14.877 256+0 records in 00:13:14.877 256+0 records out 00:13:14.877 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146997 s, 7.1 MB/s 00:13:14.877 02:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:14.877 02:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:14.877 256+0 records in 00:13:14.877 256+0 records out 00:13:14.877 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.211412 s, 5.0 MB/s 00:13:14.877 02:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:14.877 02:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:15.159 256+0 records in 00:13:15.159 256+0 records out 00:13:15.159 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.229489 s, 4.6 MB/s 00:13:15.159 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:15.159 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:15.420 256+0 records in 00:13:15.420 256+0 records out 00:13:15.420 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.205905 s, 5.1 MB/s 00:13:15.420 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:15.420 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:15.420 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:15.420 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:15.420 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:15.420 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:15.420 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:15.420 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:15.420 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:15.420 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:15.420 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:15.420 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:15.420 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:15.420 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:15.420 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:15.420 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:15.420 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:15.420 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:15.420 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:15.420 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:15.420 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:15.420 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:15.420 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:15.420 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:15.420 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:15.420 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:15.420 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:15.680 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:15.680 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:15.680 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:15.680 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:15.680 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:15.680 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:15.680 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:15.680 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:15.680 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:15.680 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:15.938 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:15.938 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:15.938 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:15.938 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:15.938 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:15.938 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:15.938 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:15.938 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:15.938 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:15.938 02:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:16.196 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:16.196 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:16.196 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:16.196 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:16.196 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:16.196 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:16.196 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:16.196 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:16.196 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:16.196 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:16.455 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:16.455 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:16.455 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:16.455 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:16.455 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:16.455 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:16.455 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:16.455 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:16.455 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:16.455 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:16.455 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:16.455 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:16.455 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:16.455 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:16.455 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:16.455 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:16.455 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:16.455 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:16.455 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:16.455 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:16.714 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:16.714 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:16.714 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:16.714 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:16.714 02:24:03 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:16.714 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:16.714 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:16.714 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:16.714 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:16.714 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:16.714 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:16.972 02:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:16.972 02:24:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:16.972 02:24:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:16.972 02:24:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:16.972 02:24:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:16.972 02:24:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:16.972 02:24:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:16.972 02:24:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:16.972 02:24:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:16.972 02:24:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:16.972 02:24:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:16.972 02:24:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:16.972 02:24:04 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:16.972 02:24:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:16.972 02:24:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:16.972 02:24:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:17.230 malloc_lvol_verify 00:13:17.231 02:24:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:17.489 0d7ff813-7432-4877-9717-9c50c6a95e65 00:13:17.489 02:24:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:17.748 4deade68-bfa4-4630-b0f3-b77fcc31a3e0 00:13:17.748 02:24:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:18.006 /dev/nbd0 00:13:18.006 02:24:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:18.006 02:24:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:18.006 02:24:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:18.006 02:24:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:18.006 02:24:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
00:13:18.006 mke2fs 1.47.0 (5-Feb-2023) 00:13:18.006 Discarding device blocks: 0/4096 done 00:13:18.006 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:18.006 00:13:18.006 Allocating group tables: 0/1 done 00:13:18.006 Writing inode tables: 0/1 done 00:13:18.006 Creating journal (1024 blocks): done 00:13:18.006 Writing superblocks and filesystem accounting information: 0/1 done 00:13:18.006 00:13:18.006 02:24:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:18.006 02:24:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:18.006 02:24:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:18.006 02:24:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:18.006 02:24:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:18.006 02:24:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:18.006 02:24:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:18.006 02:24:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:18.006 02:24:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:18.006 02:24:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:18.006 02:24:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:18.006 02:24:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:18.006 02:24:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:18.006 02:24:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:18.006 02:24:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:18.006 02:24:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 69520 00:13:18.006 02:24:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 69520 ']' 00:13:18.006 02:24:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 69520 00:13:18.006 02:24:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:13:18.006 02:24:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:18.006 02:24:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69520 00:13:18.265 killing process with pid 69520 00:13:18.265 02:24:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:18.265 02:24:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:18.265 02:24:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69520' 00:13:18.265 02:24:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 69520 00:13:18.265 02:24:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 69520 00:13:18.835 02:24:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:18.835 00:13:18.835 real 0m10.217s 00:13:18.835 user 0m14.106s 00:13:18.835 sys 0m3.447s 00:13:18.835 ************************************ 00:13:18.835 END TEST bdev_nbd 00:13:18.835 ************************************ 00:13:18.835 02:24:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:18.835 
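The bdev_nbd stage that finishes above reduces to a write-then-compare loop over the exported NBD devices, followed by detaching them over the RPC socket. Below is a minimal standalone sketch of that pattern; the dd/cmp flags and the rpc.py calls are taken from the trace, while the scratch-file path and the random-data source are assumptions:

# write the same 1 MiB of random data to every device, then compare it back
tmp_file=/tmp/nbdrandtest                   # assumed scratch path
nbd_list='/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256     # 256 x 4 KiB = 1 MiB
for dev in $nbd_list; do
    # oflag=direct bypasses the page cache so the data reaches the bdev
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done
for dev in $nbd_list; do
    cmp -b -n 1M "$tmp_file" "$dev"         # byte-wise compare; non-zero exit on mismatch
done
rm "$tmp_file"
for dev in $nbd_list; do
    # detach over the SPDK RPC socket; the test then polls /proc/partitions
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
done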
02:24:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:18.836 02:24:05 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:18.836 02:24:05 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:18.836 02:24:05 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:18.836 02:24:05 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:18.836 02:24:05 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:18.836 02:24:05 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:18.836 02:24:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:18.836 ************************************ 00:13:18.836 START TEST bdev_fio 00:13:18.836 ************************************ 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:13:18.836 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # echo 
serialize_overlap=1 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:18.836 ************************************ 00:13:18.836 START TEST bdev_fio_rw_verify 00:13:18.836 ************************************ 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # break 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:18.836 02:24:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:19.098 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:19.098 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:19.098 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:19.098 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:19.098 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:19.098 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:19.098 fio-3.35 00:13:19.098 Starting 6 threads 00:13:31.337 00:13:31.337 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=69919: Mon Nov 4 02:24:16 2024 00:13:31.337 read: IOPS=15.2k, BW=59.5MiB/s (62.4MB/s)(595MiB/10002msec) 00:13:31.337 slat (usec): min=2, max=4041, avg= 6.88, stdev=18.68 00:13:31.337 clat (usec): min=72, max=10233, avg=1285.51, 
stdev=781.81 00:13:31.337 lat (usec): min=76, max=10247, avg=1292.39, stdev=782.69 00:13:31.337 clat percentiles (usec): 00:13:31.337 | 50.000th=[ 1188], 99.000th=[ 3621], 99.900th=[ 4883], 99.990th=[ 7111], 00:13:31.337 | 99.999th=[10290] 00:13:31.337 write: IOPS=15.5k, BW=60.4MiB/s (63.3MB/s)(604MiB/10002msec); 0 zone resets 00:13:31.337 slat (usec): min=13, max=5564, avg=41.95, stdev=140.22 00:13:31.337 clat (usec): min=67, max=8011, avg=1510.25, stdev=857.81 00:13:31.337 lat (usec): min=82, max=8242, avg=1552.20, stdev=871.58 00:13:31.337 clat percentiles (usec): 00:13:31.337 | 50.000th=[ 1385], 99.000th=[ 4113], 99.900th=[ 5473], 99.990th=[ 7242], 00:13:31.337 | 99.999th=[ 7767] 00:13:31.337 bw ( KiB/s): min=48525, max=124134, per=100.00%, avg=62273.00, stdev=3156.44, samples=114 00:13:31.337 iops : min=12129, max=31031, avg=15567.32, stdev=789.08, samples=114 00:13:31.337 lat (usec) : 100=0.01%, 250=3.31%, 500=9.42%, 750=11.29%, 1000=11.32% 00:13:31.337 lat (msec) : 2=44.16%, 4=19.62%, 10=0.86%, 20=0.01% 00:13:31.337 cpu : usr=43.95%, sys=31.50%, ctx=5829, majf=0, minf=15183 00:13:31.337 IO depths : 1=11.2%, 2=23.6%, 4=51.3%, 8=13.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:31.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:31.337 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:31.337 issued rwts: total=152267,154661,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:31.338 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:31.338 00:13:31.338 Run status group 0 (all jobs): 00:13:31.338 READ: bw=59.5MiB/s (62.4MB/s), 59.5MiB/s-59.5MiB/s (62.4MB/s-62.4MB/s), io=595MiB (624MB), run=10002-10002msec 00:13:31.338 WRITE: bw=60.4MiB/s (63.3MB/s), 60.4MiB/s-60.4MiB/s (63.3MB/s-63.3MB/s), io=604MiB (633MB), run=10002-10002msec 00:13:31.338 ----------------------------------------------------- 00:13:31.338 Suppressions used: 00:13:31.338 count bytes template 00:13:31.338 6 48 /usr/src/fio/parse.c 00:13:31.338 2292 220032 /usr/src/fio/iolog.c 00:13:31.338 1 8 libtcmalloc_minimal.so 00:13:31.338 1 904 libcrypto.so 00:13:31.338 ----------------------------------------------------- 00:13:31.338 00:13:31.338 00:13:31.338 real 0m12.124s 00:13:31.338 user 0m27.999s 00:13:31.338 sys 0m19.247s 00:13:31.338 ************************************ 00:13:31.338 END TEST bdev_fio_rw_verify 00:13:31.338 ************************************ 00:13:31.338 02:24:17 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:31.338 02:24:17 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:31.338 02:24:17 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:31.338 02:24:17 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:31.338 02:24:17 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:31.338 02:24:17 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:31.338 02:24:17 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:13:31.338 02:24:17 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:13:31.338 02:24:17 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:13:31.338 02:24:17 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- 
# local fio_dir=/usr/src/fio 00:13:31.338 02:24:17 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:31.338 02:24:17 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:13:31.338 02:24:17 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:13:31.338 02:24:17 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:31.338 02:24:17 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:13:31.338 02:24:17 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:13:31.338 02:24:17 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:13:31.338 02:24:17 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:13:31.338 02:24:17 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:31.338 02:24:17 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "0a16d3f0-8428-425c-94c0-e3e2fb63d664"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "0a16d3f0-8428-425c-94c0-e3e2fb63d664",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "cbdd7d89-9396-424a-995a-d58bb5f9ad48"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "cbdd7d89-9396-424a-995a-d58bb5f9ad48",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "ca1ad202-2c6a-4c71-ba9c-3c23be9dd421"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ca1ad202-2c6a-4c71-ba9c-3c23be9dd421",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' 
"write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "a968bb52-2c9b-4b5a-adcf-395cc6902fa7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a968bb52-2c9b-4b5a-adcf-395cc6902fa7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "1f800f44-7247-4a79-b550-6823852f032f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1f800f44-7247-4a79-b550-6823852f032f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "38ec6695-6d27-42bc-8ba4-7495b3a8b04c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "38ec6695-6d27-42bc-8ba4-7495b3a8b04c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:31.338 02:24:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:31.338 02:24:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:31.338 /home/vagrant/spdk_repo/spdk 00:13:31.338 02:24:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:31.338 02:24:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:31.338 02:24:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # 
return 0 00:13:31.338 00:13:31.338 real 0m12.292s 00:13:31.338 user 0m28.074s 00:13:31.338 sys 0m19.321s 00:13:31.338 02:24:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:31.338 02:24:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:31.338 ************************************ 00:13:31.338 END TEST bdev_fio 00:13:31.338 ************************************ 00:13:31.338 02:24:18 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:31.338 02:24:18 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:31.338 02:24:18 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:13:31.338 02:24:18 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:31.338 02:24:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.338 ************************************ 00:13:31.338 START TEST bdev_verify 00:13:31.338 ************************************ 00:13:31.338 02:24:18 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:31.338 [2024-11-04 02:24:18.182046] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:13:31.338 [2024-11-04 02:24:18.182208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70090 ] 00:13:31.338 [2024-11-04 02:24:18.351598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:31.598 [2024-11-04 02:24:18.476107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.598 [2024-11-04 02:24:18.476235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.859 Running I/O for 5 seconds... 
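The fio stage that closes above is assembled in two steps: fio_config_gen writes a job file with one [job_*] section per bdev (plus serialize_overlap=1, both echoed in the trace), and fio_bdev then runs stock fio with the SPDK bdev plugin injected via LD_PRELOAD. A sketch under stated assumptions follows -- apart from serialize_overlap=1 the [global] contents are not visible in the log, so the thread, rw, and verify settings here are assumptions; the plugin path, fio flags, and JSON config are as traced:

FIO=/usr/src/fio/fio
PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev

cat > bdev.fio <<'EOF'
[global]
; apart from serialize_overlap=1 (echoed in the trace), these [global]
; settings are assumptions; rw=randwrite matches the job lines fio printed
thread=1
rw=randwrite
verify=crc32c
serialize_overlap=1

[job_nvme0n1]
filename=nvme0n1
EOF
# ...one [job_*] section per bdev, as echoed in the trace above

# ASan (when built in) and the fio bdev plugin are both preloaded
LD_PRELOAD="/usr/lib64/libasan.so.8 $PLUGIN" "$FIO" \
    --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 bdev.fio \
    --verify_state_save=0 \
    --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json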
00:13:34.191 25824.00 IOPS, 100.88 MiB/s [2024-11-04T02:24:22.245Z] 25152.00 IOPS, 98.25 MiB/s [2024-11-04T02:24:23.187Z] 24789.33 IOPS, 96.83 MiB/s [2024-11-04T02:24:24.131Z] 24544.00 IOPS, 95.88 MiB/s [2024-11-04T02:24:24.131Z] 24368.80 IOPS, 95.19 MiB/s 00:13:37.020 Latency(us) 00:13:37.020 [2024-11-04T02:24:24.131Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.020 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:37.020 Verification LBA range: start 0x0 length 0xa0000 00:13:37.020 nvme0n1 : 5.05 1849.55 7.22 0.00 0.00 69053.63 11846.89 73803.62 00:13:37.020 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:37.020 Verification LBA range: start 0xa0000 length 0xa0000 00:13:37.020 nvme0n1 : 5.06 1974.80 7.71 0.00 0.00 64702.88 8065.97 63721.16 00:13:37.020 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:37.020 Verification LBA range: start 0x0 length 0xbd0bd 00:13:37.020 nvme1n1 : 5.06 2293.44 8.96 0.00 0.00 55530.41 5822.62 61704.66 00:13:37.020 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:37.020 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:37.020 nvme1n1 : 5.06 2535.21 9.90 0.00 0.00 50117.57 4990.82 54041.99 00:13:37.020 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:37.020 Verification LBA range: start 0x0 length 0x80000 00:13:37.020 nvme2n1 : 5.08 1942.00 7.59 0.00 0.00 65323.83 6654.42 59284.87 00:13:37.020 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:37.020 Verification LBA range: start 0x80000 length 0x80000 00:13:37.020 nvme2n1 : 5.04 1981.63 7.74 0.00 0.00 64176.95 9679.16 57268.38 00:13:37.020 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:37.020 Verification LBA range: start 0x0 length 0x80000 00:13:37.020 nvme2n2 : 5.07 1842.97 7.20 0.00 0.00 68659.02 12098.95 76223.41 00:13:37.020 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:37.020 Verification LBA range: start 0x80000 length 0x80000 00:13:37.020 nvme2n2 : 5.06 1972.97 7.71 0.00 0.00 64370.65 8418.86 66544.25 00:13:37.020 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:37.020 Verification LBA range: start 0x0 length 0x80000 00:13:37.020 nvme2n3 : 5.08 1865.72 7.29 0.00 0.00 67726.69 6704.84 67754.14 00:13:37.020 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:37.020 Verification LBA range: start 0x80000 length 0x80000 00:13:37.020 nvme2n3 : 5.07 1970.67 7.70 0.00 0.00 64320.55 4184.22 66544.25 00:13:37.020 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:37.020 Verification LBA range: start 0x0 length 0x20000 00:13:37.020 nvme3n1 : 5.08 1864.21 7.28 0.00 0.00 67670.84 5671.38 66544.25 00:13:37.020 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:37.020 Verification LBA range: start 0x20000 length 0x20000 00:13:37.020 nvme3n1 : 5.07 1969.90 7.69 0.00 0.00 64233.48 4915.20 61301.37 00:13:37.020 [2024-11-04T02:24:24.131Z] =================================================================================================================== 00:13:37.020 [2024-11-04T02:24:24.131Z] Total : 24063.07 94.00 0.00 0.00 63306.76 4184.22 76223.41 00:13:37.963 00:13:37.963 real 0m6.757s 00:13:37.963 user 0m10.890s 00:13:37.963 sys 0m1.459s 00:13:37.963 02:24:24 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:13:37.963 ************************************ 00:13:37.963 END TEST bdev_verify 00:13:37.963 ************************************ 00:13:37.963 02:24:24 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:37.963 02:24:24 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:37.963 02:24:24 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:13:37.963 02:24:24 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:37.963 02:24:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:37.963 ************************************ 00:13:37.963 START TEST bdev_verify_big_io 00:13:37.963 ************************************ 00:13:37.963 02:24:24 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:37.963 [2024-11-04 02:24:25.011242] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:13:37.963 [2024-11-04 02:24:25.011404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70196 ] 00:13:38.224 [2024-11-04 02:24:25.178429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:38.224 [2024-11-04 02:24:25.298512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.224 [2024-11-04 02:24:25.298593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.794 Running I/O for 5 seconds... 
00:13:44.955 2072.00 IOPS, 129.50 MiB/s [2024-11-04T02:24:32.328Z] 3020.00 IOPS, 188.75 MiB/s [2024-11-04T02:24:32.328Z] 3276.33 IOPS, 204.77 MiB/s 00:13:45.217 Latency(us) 00:13:45.217 [2024-11-04T02:24:32.328Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:45.217 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:45.217 Verification LBA range: start 0x0 length 0xa000 00:13:45.217 nvme0n1 : 6.13 83.53 5.22 0.00 0.00 1456345.40 250045.05 1438968.91 00:13:45.217 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:45.217 Verification LBA range: start 0xa000 length 0xa000 00:13:45.217 nvme0n1 : 5.74 150.60 9.41 0.00 0.00 826183.04 95581.74 890483.00 00:13:45.217 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:45.217 Verification LBA range: start 0x0 length 0xbd0b 00:13:45.217 nvme1n1 : 5.86 124.96 7.81 0.00 0.00 950960.14 79046.50 1167952.34 00:13:45.217 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:45.217 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:45.217 nvme1n1 : 5.66 178.05 11.13 0.00 0.00 681241.57 7461.02 816276.09 00:13:45.217 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:45.217 Verification LBA range: start 0x0 length 0x8000 00:13:45.217 nvme2n1 : 5.92 105.38 6.59 0.00 0.00 1064227.97 24500.38 1677721.60 00:13:45.217 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:45.217 Verification LBA range: start 0x8000 length 0x8000 00:13:45.217 nvme2n1 : 5.66 135.58 8.47 0.00 0.00 868802.56 129055.51 864671.90 00:13:45.217 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:45.217 Verification LBA range: start 0x0 length 0x8000 00:13:45.217 nvme2n2 : 6.01 149.16 9.32 0.00 0.00 724822.20 29037.49 896935.78 00:13:45.217 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:45.217 Verification LBA range: start 0x8000 length 0x8000 00:13:45.217 nvme2n2 : 5.67 124.39 7.77 0.00 0.00 923034.62 58881.58 1871304.86 00:13:45.217 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:45.217 Verification LBA range: start 0x0 length 0x8000 00:13:45.217 nvme2n3 : 6.04 111.31 6.96 0.00 0.00 938851.47 6326.74 3665176.42 00:13:45.217 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:45.217 Verification LBA range: start 0x8000 length 0x8000 00:13:45.217 nvme2n3 : 5.79 127.04 7.94 0.00 0.00 885550.41 41741.39 1664816.05 00:13:45.217 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:45.217 Verification LBA range: start 0x0 length 0x2000 00:13:45.217 nvme3n1 : 6.23 198.90 12.43 0.00 0.00 505531.92 2697.06 2555299.05 00:13:45.217 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:45.217 Verification LBA range: start 0x2000 length 0x2000 00:13:45.217 nvme3n1 : 5.80 184.91 11.56 0.00 0.00 593627.73 6452.78 1400252.26 00:13:45.217 [2024-11-04T02:24:32.328Z] =================================================================================================================== 00:13:45.217 [2024-11-04T02:24:32.328Z] Total : 1673.80 104.61 0.00 0.00 815918.79 2697.06 3665176.42 00:13:46.162 00:13:46.162 real 0m8.119s 00:13:46.162 user 0m14.822s 00:13:46.162 sys 0m0.508s 00:13:46.162 ************************************ 00:13:46.162 END TEST bdev_verify_big_io 00:13:46.162 ************************************ 00:13:46.162 
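Both verification stages above drive the same bdevperf example binary against the bdev.json generated earlier; only the per-I/O size changes between the two runs, and each job shows up once per core of the 0x3 mask in the latency tables. The two invocations, with flags as they appear in the traced run_test lines:

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

# bdev_verify: queue depth 128, 4 KiB I/Os, 5 s run, two reactors (core mask 0x3)
"$BDEVPERF" --json "$CONF" -q 128 -o 4096 -w verify -t 5 -C -m 0x3
# bdev_verify_big_io: identical run with 64 KiB I/Os to exercise the large-I/O path
"$BDEVPERF" --json "$CONF" -q 128 -o 65536 -w verify -t 5 -C -m 0x3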
02:24:33 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:46.162 02:24:33 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.162 02:24:33 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:46.162 02:24:33 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:13:46.162 02:24:33 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:46.162 02:24:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:46.162 ************************************ 00:13:46.162 START TEST bdev_write_zeroes 00:13:46.162 ************************************ 00:13:46.162 02:24:33 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:46.162 [2024-11-04 02:24:33.189169] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:13:46.162 [2024-11-04 02:24:33.189289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70306 ] 00:13:46.422 [2024-11-04 02:24:33.347002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.422 [2024-11-04 02:24:33.456349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.992 Running I/O for 1 seconds... 00:13:47.935 102176.00 IOPS, 399.12 MiB/s 00:13:47.935 Latency(us) 00:13:47.935 [2024-11-04T02:24:35.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:47.935 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:47.935 nvme0n1 : 1.01 16799.28 65.62 0.00 0.00 7610.74 5520.15 21072.34 00:13:47.935 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:47.935 nvme1n1 : 1.02 17579.32 68.67 0.00 0.00 7267.18 5419.32 15829.46 00:13:47.935 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:47.935 nvme2n1 : 1.02 16776.16 65.53 0.00 0.00 7562.10 4184.22 18450.90 00:13:47.935 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:47.935 nvme2n2 : 1.02 16757.30 65.46 0.00 0.00 7565.12 4285.05 18753.38 00:13:47.935 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:47.935 nvme2n3 : 1.02 16738.52 65.38 0.00 0.00 7569.80 4360.66 19156.68 00:13:47.935 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:47.935 nvme3n1 : 1.02 16735.38 65.37 0.00 0.00 7564.07 4511.90 19459.15 00:13:47.935 [2024-11-04T02:24:35.046Z] =================================================================================================================== 00:13:47.935 [2024-11-04T02:24:35.046Z] Total : 101385.96 396.04 0.00 0.00 7521.03 4184.22 21072.34 00:13:48.566 00:13:48.566 real 0m2.552s 00:13:48.566 user 0m1.937s 00:13:48.567 sys 0m0.427s 00:13:48.567 02:24:35 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:48.567 02:24:35 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:48.567 
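For completeness, the short write_zeroes pass above reuses the same bdevperf harness with the workload switched and a one-second runtime, reconstructed here from the traced run_test line:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w write_zeroes -t 1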
************************************ 00:13:48.567 END TEST bdev_write_zeroes 00:13:48.567 ************************************ 00:13:48.828 02:24:35 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:48.828 02:24:35 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:13:48.828 02:24:35 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:48.828 02:24:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:48.828 ************************************ 00:13:48.828 START TEST bdev_json_nonenclosed 00:13:48.828 ************************************ 00:13:48.828 02:24:35 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:48.828 [2024-11-04 02:24:35.823672] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:13:48.828 [2024-11-04 02:24:35.823936] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70354 ] 00:13:49.090 [2024-11-04 02:24:35.990382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.090 [2024-11-04 02:24:36.111362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.090 [2024-11-04 02:24:36.111472] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:49.090 [2024-11-04 02:24:36.111493] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:49.090 [2024-11-04 02:24:36.111504] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:49.352 00:13:49.352 real 0m0.558s 00:13:49.352 user 0m0.334s 00:13:49.352 sys 0m0.118s 00:13:49.352 ************************************ 00:13:49.352 END TEST bdev_json_nonenclosed 00:13:49.352 ************************************ 00:13:49.352 02:24:36 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:49.352 02:24:36 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:49.352 02:24:36 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:49.352 02:24:36 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:13:49.352 02:24:36 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:49.352 02:24:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:49.352 ************************************ 00:13:49.352 START TEST bdev_json_nonarray 00:13:49.352 ************************************ 00:13:49.352 02:24:36 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:49.352 [2024-11-04 02:24:36.451556] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:13:49.352 [2024-11-04 02:24:36.451701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70379 ] 00:13:49.613 [2024-11-04 02:24:36.617245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.876 [2024-11-04 02:24:36.739996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.876 [2024-11-04 02:24:36.740111] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:49.876 [2024-11-04 02:24:36.740130] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:49.876 [2024-11-04 02:24:36.740141] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:49.876 00:13:49.876 real 0m0.560s 00:13:49.876 user 0m0.339s 00:13:49.876 sys 0m0.115s 00:13:49.876 02:24:36 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:49.876 02:24:36 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:49.876 ************************************ 00:13:49.876 END TEST bdev_json_nonarray 00:13:49.876 ************************************ 00:13:50.137 02:24:36 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:50.137 02:24:36 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:50.137 02:24:36 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:50.137 02:24:36 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:50.137 02:24:36 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:50.137 02:24:36 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:50.137 02:24:36 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:50.137 02:24:36 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:50.137 02:24:36 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:50.137 02:24:36 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:50.137 02:24:36 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:50.137 02:24:36 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:50.399 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:52.985 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:52.985 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:52.985 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:53.246 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:53.246 00:13:53.246 real 0m56.093s 00:13:53.246 user 1m25.096s 00:13:53.246 sys 0m29.414s 00:13:53.246 02:24:40 blockdev_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:53.246 ************************************ 00:13:53.246 END TEST blockdev_xnvme 00:13:53.246 ************************************ 00:13:53.246 02:24:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:53.508 02:24:40 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:53.508 02:24:40 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:53.508 02:24:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:53.508 02:24:40 -- 
common/autotest_common.sh@10 -- # set +x 00:13:53.508 ************************************ 00:13:53.508 START TEST ublk 00:13:53.508 ************************************ 00:13:53.508 02:24:40 ublk -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:53.508 * Looking for test storage... 00:13:53.508 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:53.508 02:24:40 ublk -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:53.508 02:24:40 ublk -- common/autotest_common.sh@1691 -- # lcov --version 00:13:53.508 02:24:40 ublk -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:53.508 02:24:40 ublk -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:53.508 02:24:40 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:53.508 02:24:40 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:53.508 02:24:40 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:53.508 02:24:40 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:13:53.508 02:24:40 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:13:53.508 02:24:40 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:13:53.508 02:24:40 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:13:53.508 02:24:40 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:13:53.508 02:24:40 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:13:53.508 02:24:40 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:13:53.508 02:24:40 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:53.508 02:24:40 ublk -- scripts/common.sh@344 -- # case "$op" in 00:13:53.509 02:24:40 ublk -- scripts/common.sh@345 -- # : 1 00:13:53.509 02:24:40 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:53.509 02:24:40 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:53.509 02:24:40 ublk -- scripts/common.sh@365 -- # decimal 1 00:13:53.509 02:24:40 ublk -- scripts/common.sh@353 -- # local d=1 00:13:53.509 02:24:40 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:53.509 02:24:40 ublk -- scripts/common.sh@355 -- # echo 1 00:13:53.509 02:24:40 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:13:53.509 02:24:40 ublk -- scripts/common.sh@366 -- # decimal 2 00:13:53.509 02:24:40 ublk -- scripts/common.sh@353 -- # local d=2 00:13:53.509 02:24:40 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:53.509 02:24:40 ublk -- scripts/common.sh@355 -- # echo 2 00:13:53.509 02:24:40 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:13:53.509 02:24:40 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:53.509 02:24:40 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:53.509 02:24:40 ublk -- scripts/common.sh@368 -- # return 0 00:13:53.509 02:24:40 ublk -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:53.509 02:24:40 ublk -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:53.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.509 --rc genhtml_branch_coverage=1 00:13:53.509 --rc genhtml_function_coverage=1 00:13:53.509 --rc genhtml_legend=1 00:13:53.509 --rc geninfo_all_blocks=1 00:13:53.509 --rc geninfo_unexecuted_blocks=1 00:13:53.509 00:13:53.509 ' 00:13:53.509 02:24:40 ublk -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:53.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.509 --rc genhtml_branch_coverage=1 00:13:53.509 --rc genhtml_function_coverage=1 00:13:53.509 --rc genhtml_legend=1 00:13:53.509 --rc geninfo_all_blocks=1 00:13:53.509 --rc geninfo_unexecuted_blocks=1 00:13:53.509 00:13:53.509 ' 00:13:53.509 02:24:40 ublk -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:53.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.509 --rc genhtml_branch_coverage=1 00:13:53.509 --rc genhtml_function_coverage=1 00:13:53.509 --rc genhtml_legend=1 00:13:53.509 --rc geninfo_all_blocks=1 00:13:53.509 --rc geninfo_unexecuted_blocks=1 00:13:53.509 00:13:53.509 ' 00:13:53.509 02:24:40 ublk -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:53.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.509 --rc genhtml_branch_coverage=1 00:13:53.509 --rc genhtml_function_coverage=1 00:13:53.509 --rc genhtml_legend=1 00:13:53.509 --rc geninfo_all_blocks=1 00:13:53.509 --rc geninfo_unexecuted_blocks=1 00:13:53.509 00:13:53.509 ' 00:13:53.509 02:24:40 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:53.509 02:24:40 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:53.509 02:24:40 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:53.509 02:24:40 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:53.509 02:24:40 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:53.509 02:24:40 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:53.509 02:24:40 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:53.509 02:24:40 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:53.509 02:24:40 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:53.509 02:24:40 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:53.509 02:24:40 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:53.509 02:24:40 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:53.509 02:24:40 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:53.509 02:24:40 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:53.509 02:24:40 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:53.509 02:24:40 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:53.509 02:24:40 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:53.509 02:24:40 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:53.509 02:24:40 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:53.509 02:24:40 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:53.509 02:24:40 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:53.509 02:24:40 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:53.509 02:24:40 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:53.509 ************************************ 00:13:53.509 START TEST test_save_ublk_config 00:13:53.509 ************************************ 00:13:53.509 02:24:40 ublk.test_save_ublk_config -- common/autotest_common.sh@1127 -- # test_save_config 00:13:53.509 02:24:40 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:53.509 02:24:40 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=70667 00:13:53.509 02:24:40 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:53.509 02:24:40 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:53.509 02:24:40 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 70667 00:13:53.509 02:24:40 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 70667 ']' 00:13:53.509 02:24:40 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.509 02:24:40 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:53.509 02:24:40 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.509 02:24:40 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:53.509 02:24:40 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:53.770 [2024-11-04 02:24:40.694329] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
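The lines above complete the harness setup for this suite: ublk.sh pins the test geometry (four devices, four queues of depth 512, a 128 MiB malloc backing per disk), loads the ublk_drv kernel module, and test_save_ublk_config then launches spdk_tgt with ublk debug logging (-L ublk) and waits on its RPC socket, which is the startup now tracing below. A minimal stand-alone sketch of the same bootstrap, assuming an SPDK build tree and replacing the harness's waitforlisten helper with a plain polling loop:

  # Sketch: bring up a ublk-capable SPDK target by hand.
  sudo modprobe ublk_drv                      # kernel side: ublk control device appears
  ./build/bin/spdk_tgt -L ublk &              # target with ublk debug tracing, as in this run
  tgtpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.2                               # poll the RPC socket until the app is listening
  done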
00:13:53.770 [2024-11-04 02:24:40.694484] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70667 ] 00:13:53.771 [2024-11-04 02:24:40.861942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.032 [2024-11-04 02:24:40.986302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.605 02:24:41 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:54.605 02:24:41 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:13:54.605 02:24:41 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:54.605 02:24:41 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:54.605 02:24:41 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.605 02:24:41 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:54.605 [2024-11-04 02:24:41.708892] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:54.605 [2024-11-04 02:24:41.709808] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:54.867 malloc0 00:13:54.867 [2024-11-04 02:24:41.781040] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:54.867 [2024-11-04 02:24:41.781143] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:54.867 [2024-11-04 02:24:41.781154] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:54.867 [2024-11-04 02:24:41.781162] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:54.867 [2024-11-04 02:24:41.790009] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:54.867 [2024-11-04 02:24:41.790044] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:54.867 [2024-11-04 02:24:41.796911] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:54.867 [2024-11-04 02:24:41.797036] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:54.867 [2024-11-04 02:24:41.813909] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:54.867 0 00:13:54.867 02:24:41 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.867 02:24:41 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:54.867 02:24:41 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.867 02:24:41 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:55.129 02:24:42 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.129 02:24:42 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:13:55.129 "subsystems": [ 00:13:55.129 { 00:13:55.129 "subsystem": "fsdev", 00:13:55.129 "config": [ 00:13:55.129 { 00:13:55.129 "method": "fsdev_set_opts", 00:13:55.129 "params": { 00:13:55.129 "fsdev_io_pool_size": 65535, 00:13:55.129 "fsdev_io_cache_size": 256 00:13:55.129 } 00:13:55.129 } 00:13:55.129 ] 00:13:55.129 }, 00:13:55.129 { 00:13:55.129 "subsystem": "keyring", 00:13:55.129 "config": [] 00:13:55.129 }, 00:13:55.129 { 00:13:55.129 "subsystem": "iobuf", 00:13:55.129 "config": [ 00:13:55.129 { 
00:13:55.129 "method": "iobuf_set_options", 00:13:55.129 "params": { 00:13:55.129 "small_pool_count": 8192, 00:13:55.129 "large_pool_count": 1024, 00:13:55.129 "small_bufsize": 8192, 00:13:55.129 "large_bufsize": 135168, 00:13:55.129 "enable_numa": false 00:13:55.129 } 00:13:55.129 } 00:13:55.129 ] 00:13:55.129 }, 00:13:55.129 { 00:13:55.129 "subsystem": "sock", 00:13:55.129 "config": [ 00:13:55.129 { 00:13:55.129 "method": "sock_set_default_impl", 00:13:55.129 "params": { 00:13:55.129 "impl_name": "posix" 00:13:55.129 } 00:13:55.129 }, 00:13:55.129 { 00:13:55.129 "method": "sock_impl_set_options", 00:13:55.129 "params": { 00:13:55.129 "impl_name": "ssl", 00:13:55.129 "recv_buf_size": 4096, 00:13:55.129 "send_buf_size": 4096, 00:13:55.129 "enable_recv_pipe": true, 00:13:55.129 "enable_quickack": false, 00:13:55.129 "enable_placement_id": 0, 00:13:55.129 "enable_zerocopy_send_server": true, 00:13:55.129 "enable_zerocopy_send_client": false, 00:13:55.129 "zerocopy_threshold": 0, 00:13:55.129 "tls_version": 0, 00:13:55.129 "enable_ktls": false 00:13:55.129 } 00:13:55.129 }, 00:13:55.129 { 00:13:55.129 "method": "sock_impl_set_options", 00:13:55.129 "params": { 00:13:55.129 "impl_name": "posix", 00:13:55.129 "recv_buf_size": 2097152, 00:13:55.129 "send_buf_size": 2097152, 00:13:55.129 "enable_recv_pipe": true, 00:13:55.129 "enable_quickack": false, 00:13:55.129 "enable_placement_id": 0, 00:13:55.129 "enable_zerocopy_send_server": true, 00:13:55.129 "enable_zerocopy_send_client": false, 00:13:55.129 "zerocopy_threshold": 0, 00:13:55.129 "tls_version": 0, 00:13:55.129 "enable_ktls": false 00:13:55.129 } 00:13:55.129 } 00:13:55.129 ] 00:13:55.129 }, 00:13:55.129 { 00:13:55.129 "subsystem": "vmd", 00:13:55.129 "config": [] 00:13:55.129 }, 00:13:55.129 { 00:13:55.129 "subsystem": "accel", 00:13:55.129 "config": [ 00:13:55.129 { 00:13:55.129 "method": "accel_set_options", 00:13:55.129 "params": { 00:13:55.129 "small_cache_size": 128, 00:13:55.129 "large_cache_size": 16, 00:13:55.129 "task_count": 2048, 00:13:55.129 "sequence_count": 2048, 00:13:55.129 "buf_count": 2048 00:13:55.129 } 00:13:55.129 } 00:13:55.129 ] 00:13:55.129 }, 00:13:55.129 { 00:13:55.129 "subsystem": "bdev", 00:13:55.129 "config": [ 00:13:55.129 { 00:13:55.129 "method": "bdev_set_options", 00:13:55.129 "params": { 00:13:55.129 "bdev_io_pool_size": 65535, 00:13:55.129 "bdev_io_cache_size": 256, 00:13:55.129 "bdev_auto_examine": true, 00:13:55.129 "iobuf_small_cache_size": 128, 00:13:55.129 "iobuf_large_cache_size": 16 00:13:55.129 } 00:13:55.129 }, 00:13:55.129 { 00:13:55.129 "method": "bdev_raid_set_options", 00:13:55.129 "params": { 00:13:55.129 "process_window_size_kb": 1024, 00:13:55.129 "process_max_bandwidth_mb_sec": 0 00:13:55.129 } 00:13:55.129 }, 00:13:55.129 { 00:13:55.129 "method": "bdev_iscsi_set_options", 00:13:55.129 "params": { 00:13:55.129 "timeout_sec": 30 00:13:55.129 } 00:13:55.129 }, 00:13:55.129 { 00:13:55.129 "method": "bdev_nvme_set_options", 00:13:55.129 "params": { 00:13:55.129 "action_on_timeout": "none", 00:13:55.129 "timeout_us": 0, 00:13:55.129 "timeout_admin_us": 0, 00:13:55.129 "keep_alive_timeout_ms": 10000, 00:13:55.129 "arbitration_burst": 0, 00:13:55.129 "low_priority_weight": 0, 00:13:55.129 "medium_priority_weight": 0, 00:13:55.129 "high_priority_weight": 0, 00:13:55.129 "nvme_adminq_poll_period_us": 10000, 00:13:55.129 "nvme_ioq_poll_period_us": 0, 00:13:55.129 "io_queue_requests": 0, 00:13:55.129 "delay_cmd_submit": true, 00:13:55.129 "transport_retry_count": 4, 00:13:55.129 
"bdev_retry_count": 3, 00:13:55.129 "transport_ack_timeout": 0, 00:13:55.129 "ctrlr_loss_timeout_sec": 0, 00:13:55.129 "reconnect_delay_sec": 0, 00:13:55.129 "fast_io_fail_timeout_sec": 0, 00:13:55.129 "disable_auto_failback": false, 00:13:55.129 "generate_uuids": false, 00:13:55.129 "transport_tos": 0, 00:13:55.129 "nvme_error_stat": false, 00:13:55.129 "rdma_srq_size": 0, 00:13:55.129 "io_path_stat": false, 00:13:55.129 "allow_accel_sequence": false, 00:13:55.129 "rdma_max_cq_size": 0, 00:13:55.129 "rdma_cm_event_timeout_ms": 0, 00:13:55.129 "dhchap_digests": [ 00:13:55.129 "sha256", 00:13:55.129 "sha384", 00:13:55.129 "sha512" 00:13:55.129 ], 00:13:55.129 "dhchap_dhgroups": [ 00:13:55.129 "null", 00:13:55.129 "ffdhe2048", 00:13:55.129 "ffdhe3072", 00:13:55.129 "ffdhe4096", 00:13:55.129 "ffdhe6144", 00:13:55.129 "ffdhe8192" 00:13:55.129 ] 00:13:55.129 } 00:13:55.129 }, 00:13:55.129 { 00:13:55.129 "method": "bdev_nvme_set_hotplug", 00:13:55.129 "params": { 00:13:55.129 "period_us": 100000, 00:13:55.129 "enable": false 00:13:55.129 } 00:13:55.129 }, 00:13:55.129 { 00:13:55.129 "method": "bdev_malloc_create", 00:13:55.129 "params": { 00:13:55.129 "name": "malloc0", 00:13:55.129 "num_blocks": 8192, 00:13:55.129 "block_size": 4096, 00:13:55.129 "physical_block_size": 4096, 00:13:55.129 "uuid": "49a910f1-7c06-479b-b348-3af528b8549c", 00:13:55.129 "optimal_io_boundary": 0, 00:13:55.129 "md_size": 0, 00:13:55.129 "dif_type": 0, 00:13:55.129 "dif_is_head_of_md": false, 00:13:55.129 "dif_pi_format": 0 00:13:55.129 } 00:13:55.129 }, 00:13:55.129 { 00:13:55.129 "method": "bdev_wait_for_examine" 00:13:55.129 } 00:13:55.129 ] 00:13:55.129 }, 00:13:55.129 { 00:13:55.129 "subsystem": "scsi", 00:13:55.129 "config": null 00:13:55.129 }, 00:13:55.129 { 00:13:55.129 "subsystem": "scheduler", 00:13:55.129 "config": [ 00:13:55.129 { 00:13:55.129 "method": "framework_set_scheduler", 00:13:55.129 "params": { 00:13:55.129 "name": "static" 00:13:55.129 } 00:13:55.129 } 00:13:55.129 ] 00:13:55.129 }, 00:13:55.129 { 00:13:55.129 "subsystem": "vhost_scsi", 00:13:55.129 "config": [] 00:13:55.129 }, 00:13:55.129 { 00:13:55.129 "subsystem": "vhost_blk", 00:13:55.129 "config": [] 00:13:55.129 }, 00:13:55.129 { 00:13:55.129 "subsystem": "ublk", 00:13:55.129 "config": [ 00:13:55.129 { 00:13:55.129 "method": "ublk_create_target", 00:13:55.129 "params": { 00:13:55.129 "cpumask": "1" 00:13:55.129 } 00:13:55.129 }, 00:13:55.129 { 00:13:55.129 "method": "ublk_start_disk", 00:13:55.129 "params": { 00:13:55.129 "bdev_name": "malloc0", 00:13:55.129 "ublk_id": 0, 00:13:55.129 "num_queues": 1, 00:13:55.129 "queue_depth": 128 00:13:55.129 } 00:13:55.129 } 00:13:55.129 ] 00:13:55.129 }, 00:13:55.129 { 00:13:55.129 "subsystem": "nbd", 00:13:55.129 "config": [] 00:13:55.129 }, 00:13:55.129 { 00:13:55.129 "subsystem": "nvmf", 00:13:55.129 "config": [ 00:13:55.130 { 00:13:55.130 "method": "nvmf_set_config", 00:13:55.130 "params": { 00:13:55.130 "discovery_filter": "match_any", 00:13:55.130 "admin_cmd_passthru": { 00:13:55.130 "identify_ctrlr": false 00:13:55.130 }, 00:13:55.130 "dhchap_digests": [ 00:13:55.130 "sha256", 00:13:55.130 "sha384", 00:13:55.130 "sha512" 00:13:55.130 ], 00:13:55.130 "dhchap_dhgroups": [ 00:13:55.130 "null", 00:13:55.130 "ffdhe2048", 00:13:55.130 "ffdhe3072", 00:13:55.130 "ffdhe4096", 00:13:55.130 "ffdhe6144", 00:13:55.130 "ffdhe8192" 00:13:55.130 ] 00:13:55.130 } 00:13:55.130 }, 00:13:55.130 { 00:13:55.130 "method": "nvmf_set_max_subsystems", 00:13:55.130 "params": { 00:13:55.130 "max_subsystems": 1024 
00:13:55.130 } 00:13:55.130 }, 00:13:55.130 { 00:13:55.130 "method": "nvmf_set_crdt", 00:13:55.130 "params": { 00:13:55.130 "crdt1": 0, 00:13:55.130 "crdt2": 0, 00:13:55.130 "crdt3": 0 00:13:55.130 } 00:13:55.130 } 00:13:55.130 ] 00:13:55.130 }, 00:13:55.130 { 00:13:55.130 "subsystem": "iscsi", 00:13:55.130 "config": [ 00:13:55.130 { 00:13:55.130 "method": "iscsi_set_options", 00:13:55.130 "params": { 00:13:55.130 "node_base": "iqn.2016-06.io.spdk", 00:13:55.130 "max_sessions": 128, 00:13:55.130 "max_connections_per_session": 2, 00:13:55.130 "max_queue_depth": 64, 00:13:55.130 "default_time2wait": 2, 00:13:55.130 "default_time2retain": 20, 00:13:55.130 "first_burst_length": 8192, 00:13:55.130 "immediate_data": true, 00:13:55.130 "allow_duplicated_isid": false, 00:13:55.130 "error_recovery_level": 0, 00:13:55.130 "nop_timeout": 60, 00:13:55.130 "nop_in_interval": 30, 00:13:55.130 "disable_chap": false, 00:13:55.130 "require_chap": false, 00:13:55.130 "mutual_chap": false, 00:13:55.130 "chap_group": 0, 00:13:55.130 "max_large_datain_per_connection": 64, 00:13:55.130 "max_r2t_per_connection": 4, 00:13:55.130 "pdu_pool_size": 36864, 00:13:55.130 "immediate_data_pool_size": 16384, 00:13:55.130 "data_out_pool_size": 2048 00:13:55.130 } 00:13:55.130 } 00:13:55.130 ] 00:13:55.130 } 00:13:55.130 ] 00:13:55.130 }' 00:13:55.130 02:24:42 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 70667 00:13:55.130 02:24:42 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 70667 ']' 00:13:55.130 02:24:42 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 70667 00:13:55.130 02:24:42 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:13:55.130 02:24:42 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:55.130 02:24:42 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70667 00:13:55.130 02:24:42 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:55.130 killing process with pid 70667 00:13:55.130 02:24:42 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:55.130 02:24:42 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70667' 00:13:55.130 02:24:42 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 70667 00:13:55.130 02:24:42 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 70667 00:13:56.515 [2024-11-04 02:24:43.604725] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:56.773 [2024-11-04 02:24:43.643956] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:56.773 [2024-11-04 02:24:43.644050] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:56.773 [2024-11-04 02:24:43.650882] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:56.773 [2024-11-04 02:24:43.650928] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:56.773 [2024-11-04 02:24:43.650938] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:56.773 [2024-11-04 02:24:43.650956] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:56.773 [2024-11-04 02:24:43.651068] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:57.709 02:24:44 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=70727 00:13:57.709 02:24:44 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 70727 00:13:57.709 02:24:44 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 70727 ']' 00:13:57.709 02:24:44 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.709 02:24:44 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:57.709 02:24:44 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.709 02:24:44 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:57.709 02:24:44 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:57.709 02:24:44 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:57.709 02:24:44 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:13:57.709 "subsystems": [ 00:13:57.709 { 00:13:57.709 "subsystem": "fsdev", 00:13:57.709 "config": [ 00:13:57.709 { 00:13:57.709 "method": "fsdev_set_opts", 00:13:57.709 "params": { 00:13:57.709 "fsdev_io_pool_size": 65535, 00:13:57.709 "fsdev_io_cache_size": 256 00:13:57.709 } 00:13:57.709 } 00:13:57.709 ] 00:13:57.709 }, 00:13:57.709 { 00:13:57.709 "subsystem": "keyring", 00:13:57.709 "config": [] 00:13:57.709 }, 00:13:57.709 { 00:13:57.709 "subsystem": "iobuf", 00:13:57.709 "config": [ 00:13:57.709 { 00:13:57.709 "method": "iobuf_set_options", 00:13:57.709 "params": { 00:13:57.709 "small_pool_count": 8192, 00:13:57.709 "large_pool_count": 1024, 00:13:57.709 "small_bufsize": 8192, 00:13:57.709 "large_bufsize": 135168, 00:13:57.709 "enable_numa": false 00:13:57.709 } 00:13:57.709 } 00:13:57.709 ] 00:13:57.709 }, 00:13:57.709 { 00:13:57.709 "subsystem": "sock", 00:13:57.709 "config": [ 00:13:57.709 { 00:13:57.709 "method": "sock_set_default_impl", 00:13:57.709 "params": { 00:13:57.709 "impl_name": "posix" 00:13:57.709 } 00:13:57.709 }, 00:13:57.709 { 00:13:57.709 "method": "sock_impl_set_options", 00:13:57.709 "params": { 00:13:57.709 "impl_name": "ssl", 00:13:57.709 "recv_buf_size": 4096, 00:13:57.709 "send_buf_size": 4096, 00:13:57.709 "enable_recv_pipe": true, 00:13:57.709 "enable_quickack": false, 00:13:57.709 "enable_placement_id": 0, 00:13:57.709 "enable_zerocopy_send_server": true, 00:13:57.709 "enable_zerocopy_send_client": false, 00:13:57.709 "zerocopy_threshold": 0, 00:13:57.709 "tls_version": 0, 00:13:57.709 "enable_ktls": false 00:13:57.709 } 00:13:57.709 }, 00:13:57.709 { 00:13:57.709 "method": "sock_impl_set_options", 00:13:57.709 "params": { 00:13:57.709 "impl_name": "posix", 00:13:57.709 "recv_buf_size": 2097152, 00:13:57.709 "send_buf_size": 2097152, 00:13:57.709 "enable_recv_pipe": true, 00:13:57.709 "enable_quickack": false, 00:13:57.709 "enable_placement_id": 0, 00:13:57.709 "enable_zerocopy_send_server": true, 00:13:57.709 "enable_zerocopy_send_client": false, 00:13:57.709 "zerocopy_threshold": 0, 00:13:57.709 "tls_version": 0, 00:13:57.709 "enable_ktls": false 00:13:57.709 } 00:13:57.709 } 00:13:57.709 ] 00:13:57.709 }, 00:13:57.709 { 00:13:57.709 "subsystem": "vmd", 00:13:57.709 "config": [] 00:13:57.709 }, 00:13:57.709 { 00:13:57.709 "subsystem": "accel", 00:13:57.709 "config": [ 00:13:57.709 { 00:13:57.709 "method": "accel_set_options", 00:13:57.709 "params": { 00:13:57.709 "small_cache_size": 128, 
00:13:57.709 "large_cache_size": 16, 00:13:57.709 "task_count": 2048, 00:13:57.709 "sequence_count": 2048, 00:13:57.709 "buf_count": 2048 00:13:57.709 } 00:13:57.709 } 00:13:57.709 ] 00:13:57.709 }, 00:13:57.709 { 00:13:57.709 "subsystem": "bdev", 00:13:57.709 "config": [ 00:13:57.709 { 00:13:57.709 "method": "bdev_set_options", 00:13:57.709 "params": { 00:13:57.709 "bdev_io_pool_size": 65535, 00:13:57.709 "bdev_io_cache_size": 256, 00:13:57.709 "bdev_auto_examine": true, 00:13:57.709 "iobuf_small_cache_size": 128, 00:13:57.709 "iobuf_large_cache_size": 16 00:13:57.709 } 00:13:57.709 }, 00:13:57.709 { 00:13:57.709 "method": "bdev_raid_set_options", 00:13:57.709 "params": { 00:13:57.709 "process_window_size_kb": 1024, 00:13:57.709 "process_max_bandwidth_mb_sec": 0 00:13:57.709 } 00:13:57.710 }, 00:13:57.710 { 00:13:57.710 "method": "bdev_iscsi_set_options", 00:13:57.710 "params": { 00:13:57.710 "timeout_sec": 30 00:13:57.710 } 00:13:57.710 }, 00:13:57.710 { 00:13:57.710 "method": "bdev_nvme_set_options", 00:13:57.710 "params": { 00:13:57.710 "action_on_timeout": "none", 00:13:57.710 "timeout_us": 0, 00:13:57.710 "timeout_admin_us": 0, 00:13:57.710 "keep_alive_timeout_ms": 10000, 00:13:57.710 "arbitration_burst": 0, 00:13:57.710 "low_priority_weight": 0, 00:13:57.710 "medium_priority_weight": 0, 00:13:57.710 "high_priority_weight": 0, 00:13:57.710 "nvme_adminq_poll_period_us": 10000, 00:13:57.710 "nvme_ioq_poll_period_us": 0, 00:13:57.710 "io_queue_requests": 0, 00:13:57.710 "delay_cmd_submit": true, 00:13:57.710 "transport_retry_count": 4, 00:13:57.710 "bdev_retry_count": 3, 00:13:57.710 "transport_ack_timeout": 0, 00:13:57.710 "ctrlr_loss_timeout_sec": 0, 00:13:57.710 "reconnect_delay_sec": 0, 00:13:57.710 "fast_io_fail_timeout_sec": 0, 00:13:57.710 "disable_auto_failback": false, 00:13:57.710 "generate_uuids": false, 00:13:57.710 "transport_tos": 0, 00:13:57.710 "nvme_error_stat": false, 00:13:57.710 "rdma_srq_size": 0, 00:13:57.710 "io_path_stat": false, 00:13:57.710 "allow_accel_sequence": false, 00:13:57.710 "rdma_max_cq_size": 0, 00:13:57.710 "rdma_cm_event_timeout_ms": 0, 00:13:57.710 "dhchap_digests": [ 00:13:57.710 "sha256", 00:13:57.710 "sha384", 00:13:57.710 "sha512" 00:13:57.710 ], 00:13:57.710 "dhchap_dhgroups": [ 00:13:57.710 "null", 00:13:57.710 "ffdhe2048", 00:13:57.710 "ffdhe3072", 00:13:57.710 "ffdhe4096", 00:13:57.710 "ffdhe6144", 00:13:57.710 "ffdhe8192" 00:13:57.710 ] 00:13:57.710 } 00:13:57.710 }, 00:13:57.710 { 00:13:57.710 "method": "bdev_nvme_set_hotplug", 00:13:57.710 "params": { 00:13:57.710 "period_us": 100000, 00:13:57.710 "enable": false 00:13:57.710 } 00:13:57.710 }, 00:13:57.710 { 00:13:57.710 "method": "bdev_malloc_create", 00:13:57.710 "params": { 00:13:57.710 "name": "malloc0", 00:13:57.710 "num_blocks": 8192, 00:13:57.710 "block_size": 4096, 00:13:57.710 "physical_block_size": 4096, 00:13:57.710 "uuid": "49a910f1-7c06-479b-b348-3af528b8549c", 00:13:57.710 "optimal_io_boundary": 0, 00:13:57.710 "md_size": 0, 00:13:57.710 "dif_type": 0, 00:13:57.710 "dif_is_head_of_md": false, 00:13:57.710 "dif_pi_format": 0 00:13:57.710 } 00:13:57.710 }, 00:13:57.710 { 00:13:57.710 "method": "bdev_wait_for_examine" 00:13:57.710 } 00:13:57.710 ] 00:13:57.710 }, 00:13:57.710 { 00:13:57.710 "subsystem": "scsi", 00:13:57.710 "config": null 00:13:57.710 }, 00:13:57.710 { 00:13:57.710 "subsystem": "scheduler", 00:13:57.710 "config": [ 00:13:57.710 { 00:13:57.710 "method": "framework_set_scheduler", 00:13:57.710 "params": { 00:13:57.710 "name": "static" 00:13:57.710 } 
00:13:57.710 } 00:13:57.710 ] 00:13:57.710 }, 00:13:57.710 { 00:13:57.710 "subsystem": "vhost_scsi", 00:13:57.710 "config": [] 00:13:57.710 }, 00:13:57.710 { 00:13:57.710 "subsystem": "vhost_blk", 00:13:57.710 "config": [] 00:13:57.710 }, 00:13:57.710 { 00:13:57.710 "subsystem": "ublk", 00:13:57.710 "config": [ 00:13:57.710 { 00:13:57.710 "method": "ublk_create_target", 00:13:57.710 "params": { 00:13:57.710 "cpumask": "1" 00:13:57.710 } 00:13:57.710 }, 00:13:57.710 { 00:13:57.710 "method": "ublk_start_disk", 00:13:57.710 "params": { 00:13:57.710 "bdev_name": "malloc0", 00:13:57.710 "ublk_id": 0, 00:13:57.710 "num_queues": 1, 00:13:57.710 "queue_depth": 128 00:13:57.710 } 00:13:57.710 } 00:13:57.710 ] 00:13:57.710 }, 00:13:57.710 { 00:13:57.710 "subsystem": "nbd", 00:13:57.710 "config": [] 00:13:57.710 }, 00:13:57.710 { 00:13:57.710 "subsystem": "nvmf", 00:13:57.710 "config": [ 00:13:57.710 { 00:13:57.710 "method": "nvmf_set_config", 00:13:57.710 "params": { 00:13:57.710 "discovery_filter": "match_any", 00:13:57.710 "admin_cmd_passthru": { 00:13:57.710 "identify_ctrlr": false 00:13:57.710 }, 00:13:57.710 "dhchap_digests": [ 00:13:57.710 "sha256", 00:13:57.710 "sha384", 00:13:57.710 "sha512" 00:13:57.710 ], 00:13:57.710 "dhchap_dhgroups": [ 00:13:57.710 "null", 00:13:57.710 "ffdhe2048", 00:13:57.710 "ffdhe3072", 00:13:57.710 "ffdhe4096", 00:13:57.710 "ffdhe6144", 00:13:57.710 "ffdhe8192" 00:13:57.710 ] 00:13:57.710 } 00:13:57.710 }, 00:13:57.710 { 00:13:57.710 "method": "nvmf_set_max_subsystems", 00:13:57.710 "params": { 00:13:57.710 "max_subsystems": 1024 00:13:57.710 } 00:13:57.710 }, 00:13:57.710 { 00:13:57.710 "method": "nvmf_set_crdt", 00:13:57.710 "params": { 00:13:57.710 "crdt1": 0, 00:13:57.710 "crdt2": 0, 00:13:57.710 "crdt3": 0 00:13:57.710 } 00:13:57.710 } 00:13:57.710 ] 00:13:57.710 }, 00:13:57.710 { 00:13:57.710 "subsystem": "iscsi", 00:13:57.710 "config": [ 00:13:57.710 { 00:13:57.710 "method": "iscsi_set_options", 00:13:57.710 "params": { 00:13:57.710 "node_base": "iqn.2016-06.io.spdk", 00:13:57.710 "max_sessions": 128, 00:13:57.710 "max_connections_per_session": 2, 00:13:57.710 "max_queue_depth": 64, 00:13:57.711 "default_time2wait": 2, 00:13:57.711 "default_time2retain": 20, 00:13:57.711 "first_burst_length": 8192, 00:13:57.711 "immediate_data": true, 00:13:57.711 "allow_duplicated_isid": false, 00:13:57.711 "error_recovery_level": 0, 00:13:57.711 "nop_timeout": 60, 00:13:57.711 "nop_in_interval": 30, 00:13:57.711 "disable_chap": false, 00:13:57.711 "require_chap": false, 00:13:57.711 "mutual_chap": false, 00:13:57.711 "chap_group": 0, 00:13:57.711 "max_large_datain_per_connection": 64, 00:13:57.711 "max_r2t_per_connection": 4, 00:13:57.711 "pdu_pool_size": 36864, 00:13:57.711 "immediate_data_pool_size": 16384, 00:13:57.711 "data_out_pool_size": 2048 00:13:57.711 } 00:13:57.711 } 00:13:57.711 ] 00:13:57.711 } 00:13:57.711 ] 00:13:57.711 }' 00:13:57.970 [2024-11-04 02:24:44.892880] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
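The JSON just echoed is the exact output save_config produced on the first target (pid 70667), including the ublk_create_target and ublk_start_disk methods; feeding it back through bash process substitution as -c /dev/fd/63 is what lets this second target (pid 70727), now initializing below, recreate /dev/ublkb0 with no further RPCs. The same round-trip, sketched with a temp file instead of a file descriptor:

  # Sketch: capture the live configuration, then replay it into a fresh target.
  ./scripts/rpc.py save_config > ublk_config.json    # contains ublk_create_target + ublk_start_disk
  kill "$tgtpid"; wait "$tgtpid" 2>/dev/null
  ./build/bin/spdk_tgt -L ublk -c ublk_config.json & # equivalent of the -c /dev/fd/63 form above
  tgtpid=$!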
00:13:57.970 [2024-11-04 02:24:44.892999] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70727 ] 00:13:57.970 [2024-11-04 02:24:45.050021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.229 [2024-11-04 02:24:45.139283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.798 [2024-11-04 02:24:45.768881] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:58.798 [2024-11-04 02:24:45.769515] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:58.798 [2024-11-04 02:24:45.776974] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:58.798 [2024-11-04 02:24:45.777032] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:58.798 [2024-11-04 02:24:45.777039] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:58.798 [2024-11-04 02:24:45.777045] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:58.798 [2024-11-04 02:24:45.785933] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:58.798 [2024-11-04 02:24:45.785950] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:58.798 [2024-11-04 02:24:45.792886] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:58.798 [2024-11-04 02:24:45.792961] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:58.798 [2024-11-04 02:24:45.809882] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:58.798 02:24:45 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:58.798 02:24:45 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:13:58.798 02:24:45 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:58.798 02:24:45 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:58.798 02:24:45 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.798 02:24:45 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:58.798 02:24:45 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.798 02:24:45 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:58.798 02:24:45 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:58.798 02:24:45 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 70727 00:13:58.798 02:24:45 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 70727 ']' 00:13:58.798 02:24:45 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 70727 00:13:58.798 02:24:45 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:13:58.798 02:24:45 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:58.798 02:24:45 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70727 00:13:59.057 02:24:45 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:59.057 killing process with pid 70727 00:13:59.057 
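Verifying the replayed config takes only two assertions, both visible above: ublk_get_disks must report the disk at the RPC layer, and a matching block-device node must exist in the kernel. Reduced to plain bash (the jq usage mirrors the harness):

  # Sketch: confirm the restored ublk disk in both RPC state and the kernel.
  blkpath=$(./scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device')
  [[ "$blkpath" == /dev/ublkb0 ]] || { echo "unexpected device path: $blkpath"; exit 1; }
  [[ -b "$blkpath" ]] || { echo "no block node at $blkpath"; exit 1; }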
02:24:45 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:59.057 02:24:45 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70727' 00:13:59.057 02:24:45 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 70727 00:13:59.057 02:24:45 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 70727 00:13:59.993 [2024-11-04 02:24:46.904006] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:59.993 [2024-11-04 02:24:46.932933] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:59.993 [2024-11-04 02:24:46.933047] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:59.993 [2024-11-04 02:24:46.940886] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:59.993 [2024-11-04 02:24:46.940928] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:59.993 [2024-11-04 02:24:46.940934] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:59.993 [2024-11-04 02:24:46.940952] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:59.993 [2024-11-04 02:24:46.941061] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:01.368 02:24:48 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:14:01.368 00:14:01.368 real 0m7.497s 00:14:01.368 user 0m4.877s 00:14:01.368 sys 0m3.279s 00:14:01.368 02:24:48 ublk.test_save_ublk_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:01.368 ************************************ 00:14:01.368 END TEST test_save_ublk_config 00:14:01.368 ************************************ 00:14:01.368 02:24:48 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:01.368 02:24:48 ublk -- ublk/ublk.sh@139 -- # spdk_pid=70800 00:14:01.368 02:24:48 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:01.368 02:24:48 ublk -- ublk/ublk.sh@141 -- # waitforlisten 70800 00:14:01.368 02:24:48 ublk -- common/autotest_common.sh@833 -- # '[' -z 70800 ']' 00:14:01.368 02:24:48 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:01.368 02:24:48 ublk -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.368 02:24:48 ublk -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:01.368 02:24:48 ublk -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.368 02:24:48 ublk -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:01.368 02:24:48 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:01.368 [2024-11-04 02:24:48.217932] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
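With the save/restore test done, the harness restarts spdk_tgt for the remaining tests with -m 0x3, so the EAL parameters below show -c 0x3 and two reactors come up, one per core; the ublk target itself is created with cpumask "1" (as the saved config dumps above record), confining its pollers to core 0 while core 1 stays free for other reactor work. In sketch form, with the caveat that the --cpumask flag spelling is an assumption here; the underlying JSON parameter is cpumask:

  # Sketch: two-core target with the ublk pollers confined to core 0.
  ./build/bin/spdk_tgt -m 0x3 -L ublk &            # reactors on cores 0 and 1
  ./scripts/rpc.py ublk_create_target --cpumask 1  # assumed flag name; JSON param is "cpumask"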
00:14:01.368 [2024-11-04 02:24:48.218040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70800 ] 00:14:01.368 [2024-11-04 02:24:48.371335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:01.368 [2024-11-04 02:24:48.457308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.369 [2024-11-04 02:24:48.457377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.304 02:24:49 ublk -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:02.304 02:24:49 ublk -- common/autotest_common.sh@866 -- # return 0 00:14:02.304 02:24:49 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:14:02.304 02:24:49 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:02.304 02:24:49 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:02.304 02:24:49 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.304 ************************************ 00:14:02.304 START TEST test_create_ublk 00:14:02.304 ************************************ 00:14:02.304 02:24:49 ublk.test_create_ublk -- common/autotest_common.sh@1127 -- # test_create_ublk 00:14:02.304 02:24:49 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:14:02.304 02:24:49 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.304 02:24:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.304 [2024-11-04 02:24:49.069885] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:02.304 [2024-11-04 02:24:49.071413] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:02.304 02:24:49 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.304 02:24:49 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:14:02.304 02:24:49 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:14:02.304 02:24:49 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.304 02:24:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.304 02:24:49 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.304 02:24:49 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:14:02.304 02:24:49 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:02.304 02:24:49 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.304 02:24:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.304 [2024-11-04 02:24:49.221994] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:02.304 [2024-11-04 02:24:49.222294] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:02.304 [2024-11-04 02:24:49.222307] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:02.304 [2024-11-04 02:24:49.222313] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:02.304 [2024-11-04 02:24:49.231057] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:02.304 [2024-11-04 02:24:49.231074] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:02.304 
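test_create_ublk, starting above, exercises the plain creation path: one 128 MiB malloc bdev with 4 KiB blocks, one ublk disk with four queues of depth 512, and the driver handshake that traces here as the ADD_DEV -> SET_PARAMS -> START_DEV control-command sequence. Issued by hand, the flow is roughly:

  # Sketch: one ublk disk over a 128 MiB malloc bdev, matching the -q 4 -d 512 geometry above.
  ./scripts/rpc.py ublk_create_target
  ./scripts/rpc.py bdev_malloc_create 128 4096            # returns the bdev name, e.g. Malloc0
  ./scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512  # /dev/ublkb0 appears once START_DEV completes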
[2024-11-04 02:24:49.237908] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:02.304 [2024-11-04 02:24:49.247928] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:02.304 [2024-11-04 02:24:49.271895] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:02.304 02:24:49 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.304 02:24:49 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:14:02.304 02:24:49 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:14:02.304 02:24:49 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:14:02.304 02:24:49 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.304 02:24:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.304 02:24:49 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.304 02:24:49 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:14:02.304 { 00:14:02.304 "ublk_device": "/dev/ublkb0", 00:14:02.304 "id": 0, 00:14:02.304 "queue_depth": 512, 00:14:02.304 "num_queues": 4, 00:14:02.305 "bdev_name": "Malloc0" 00:14:02.305 } 00:14:02.305 ]' 00:14:02.305 02:24:49 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:14:02.305 02:24:49 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:02.305 02:24:49 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:14:02.305 02:24:49 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:14:02.305 02:24:49 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:14:02.305 02:24:49 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:14:02.305 02:24:49 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:14:02.563 02:24:49 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:14:02.563 02:24:49 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:14:02.563 02:24:49 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:02.563 02:24:49 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:14:02.563 02:24:49 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:14:02.563 02:24:49 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:14:02.563 02:24:49 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:14:02.563 02:24:49 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:14:02.563 02:24:49 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:02.563 02:24:49 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:14:02.563 02:24:49 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:02.563 02:24:49 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:02.563 02:24:49 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:02.563 02:24:49 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
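The fio command assembled above is a 10-second, time-based, direct pattern write over the whole 128 MiB device; as fio itself notes just below, the verification read phase never starts because --time_based lets the write phase consume the entire runtime. If the verify pass is actually wanted, a variant without --time_based writes the full size once and then reads every block back against the 0xcc pattern:

  # Sketch: same job, but sized rather than timed, so the verify pass runs.
  fio --name=fio_verify --filename=/dev/ublkb0 --offset=0 --size=134217728 \
      --rw=write --direct=1 --do_verify=1 --verify=pattern --verify_pattern=0xcc \
      --verify_state_save=0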
00:14:02.563 02:24:49 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:02.563 fio: verification read phase will never start because write phase uses all of runtime 00:14:02.563 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:02.563 fio-3.35 00:14:02.563 Starting 1 process 00:14:14.757 00:14:14.757 fio_test: (groupid=0, jobs=1): err= 0: pid=70839: Mon Nov 4 02:24:59 2024 00:14:14.757 write: IOPS=15.7k, BW=61.5MiB/s (64.5MB/s)(615MiB/10001msec); 0 zone resets 00:14:14.757 clat (usec): min=34, max=3859, avg=62.80, stdev=92.84 00:14:14.757 lat (usec): min=34, max=3859, avg=63.21, stdev=92.85 00:14:14.757 clat percentiles (usec): 00:14:14.757 | 1.00th=[ 40], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 52], 00:14:14.757 | 30.00th=[ 56], 40.00th=[ 58], 50.00th=[ 60], 60.00th=[ 62], 00:14:14.757 | 70.00th=[ 64], 80.00th=[ 67], 90.00th=[ 70], 95.00th=[ 73], 00:14:14.757 | 99.00th=[ 83], 99.50th=[ 90], 99.90th=[ 1876], 99.95th=[ 2769], 00:14:14.757 | 99.99th=[ 3490] 00:14:14.757 bw ( KiB/s): min=56864, max=78152, per=100.00%, avg=63099.95, stdev=5610.67, samples=19 00:14:14.757 iops : min=14216, max=19538, avg=15774.95, stdev=1402.70, samples=19 00:14:14.757 lat (usec) : 50=14.98%, 100=84.62%, 250=0.21%, 500=0.02%, 750=0.01% 00:14:14.758 lat (usec) : 1000=0.01% 00:14:14.758 lat (msec) : 2=0.06%, 4=0.09% 00:14:14.758 cpu : usr=2.19%, sys=13.15%, ctx=157500, majf=0, minf=795 00:14:14.758 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:14.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:14.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:14.758 issued rwts: total=0,157468,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:14.758 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:14.758 00:14:14.758 Run status group 0 (all jobs): 00:14:14.758 WRITE: bw=61.5MiB/s (64.5MB/s), 61.5MiB/s-61.5MiB/s (64.5MB/s-64.5MB/s), io=615MiB (645MB), run=10001-10001msec 00:14:14.758 00:14:14.758 Disk stats (read/write): 00:14:14.758 ublkb0: ios=0/155857, merge=0/0, ticks=0/8309, in_queue=8310, util=99.08% 00:14:14.758 02:24:59 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:14.758 02:24:59 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.758 02:24:59 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.758 [2024-11-04 02:24:59.679861] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:14.758 [2024-11-04 02:24:59.722928] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:14.758 [2024-11-04 02:24:59.723623] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:14.758 [2024-11-04 02:24:59.730899] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:14.758 [2024-11-04 02:24:59.731165] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:14.758 [2024-11-04 02:24:59.731174] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:14.758 02:24:59 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.758 02:24:59 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 
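The NOT wrapper invoked above inverts its command's exit status: the disk was just stopped and deleted, so a second ublk_stop_disk 0 must fail, and the target indeed answers below with JSON-RPC error -19 (ENODEV, "No such device"). Without the harness helper, the same expectation in plain bash:

  # Sketch: assert that stopping an already-deleted ublk device fails.
  if ./scripts/rpc.py ublk_stop_disk 0 2>/dev/null; then
      echo "ERROR: stopping a deleted ublk device unexpectedly succeeded"
      exit 1
  fi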
00:14:14.758 02:24:59 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:14:14.758 02:24:59 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:14.758 02:24:59 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:14.758 02:24:59 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:14.758 02:24:59 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:14.758 02:24:59 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:14.758 02:24:59 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:14:14.758 02:24:59 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.758 02:24:59 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.758 [2024-11-04 02:24:59.744955] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:14.758 request: 00:14:14.758 { 00:14:14.758 "ublk_id": 0, 00:14:14.758 "method": "ublk_stop_disk", 00:14:14.758 "req_id": 1 00:14:14.758 } 00:14:14.758 Got JSON-RPC error response 00:14:14.758 response: 00:14:14.758 { 00:14:14.758 "code": -19, 00:14:14.758 "message": "No such device" 00:14:14.758 } 00:14:14.758 02:24:59 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:14.758 02:24:59 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:14:14.758 02:24:59 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:14.758 02:24:59 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:14.758 02:24:59 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:14.758 02:24:59 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:14.758 02:24:59 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.758 02:24:59 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.758 [2024-11-04 02:24:59.762945] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:14.758 [2024-11-04 02:24:59.770880] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:14.758 [2024-11-04 02:24:59.770912] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:14.758 02:24:59 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.758 02:24:59 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:14.758 02:24:59 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.758 02:24:59 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.758 02:25:00 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.758 02:25:00 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:14.758 02:25:00 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:14.758 02:25:00 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.758 02:25:00 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.758 02:25:00 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.758 02:25:00 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:14.758 02:25:00 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:14.758 02:25:00 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:14.758 02:25:00 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:14.758 02:25:00 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.758 02:25:00 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.758 02:25:00 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.758 02:25:00 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:14.758 02:25:00 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:14.758 ************************************ 00:14:14.758 END TEST test_create_ublk 00:14:14.758 ************************************ 00:14:14.758 02:25:00 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:14.758 00:14:14.758 real 0m11.162s 00:14:14.758 user 0m0.506s 00:14:14.758 sys 0m1.394s 00:14:14.758 02:25:00 ublk.test_create_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:14.758 02:25:00 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.758 02:25:00 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:14.758 02:25:00 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:14.758 02:25:00 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:14.758 02:25:00 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.758 ************************************ 00:14:14.758 START TEST test_create_multi_ublk 00:14:14.758 ************************************ 00:14:14.758 02:25:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@1127 -- # test_create_multi_ublk 00:14:14.758 02:25:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:14.758 02:25:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.758 02:25:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.758 [2024-11-04 02:25:00.277882] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:14.758 [2024-11-04 02:25:00.279371] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:14.758 02:25:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.758 02:25:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:14.758 02:25:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:14.758 02:25:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:14.758 02:25:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:14.758 02:25:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.758 02:25:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.758 02:25:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.758 02:25:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:14.758 02:25:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:14.758 02:25:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.758 02:25:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.758 [2024-11-04 02:25:00.493984] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
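test_create_multi_ublk, starting above, repeats the single-disk flow four times (MAX_DEV_ID=3), pairing bdev MallocN with ublk id N; the per-device ADD_DEV/SET_PARAMS/START_DEV handshakes for ublk0 through ublk3 continue below. Reduced to its RPC essentials, the loop is:

  # Sketch: the four-disk creation loop, ids 0..3 as in this run.
  ./scripts/rpc.py ublk_create_target
  for i in $(seq 0 3); do
      ./scripts/rpc.py bdev_malloc_create -b "Malloc$i" 128 4096
      ./scripts/rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512
  done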
00:14:14.758 [2024-11-04 02:25:00.494277] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:14.758 [2024-11-04 02:25:00.494289] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:14.758 [2024-11-04 02:25:00.494297] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:14.758 [2024-11-04 02:25:00.505916] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:14.758 [2024-11-04 02:25:00.505936] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:14.758 [2024-11-04 02:25:00.517886] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:14.758 [2024-11-04 02:25:00.518369] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:14.758 [2024-11-04 02:25:00.530914] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:14.758 02:25:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.758 02:25:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:14.758 02:25:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:14.758 02:25:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:14.758 02:25:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.758 02:25:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.758 02:25:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.758 02:25:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:14.758 02:25:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:14.758 02:25:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.758 02:25:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.758 [2024-11-04 02:25:00.753987] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:14.758 [2024-11-04 02:25:00.754277] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:14.759 [2024-11-04 02:25:00.754291] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:14.759 [2024-11-04 02:25:00.754296] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:14.759 [2024-11-04 02:25:00.761907] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:14.759 [2024-11-04 02:25:00.761923] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:14.759 [2024-11-04 02:25:00.769889] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:14.759 [2024-11-04 02:25:00.770385] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:14.759 [2024-11-04 02:25:00.778918] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:14.759 02:25:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.759 02:25:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:14.759 02:25:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:14.759 02:25:00 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:14.759 02:25:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.759 02:25:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.759 02:25:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.759 02:25:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:14.759 02:25:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:14.759 02:25:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.759 02:25:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.759 [2024-11-04 02:25:00.937967] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:14.759 [2024-11-04 02:25:00.938264] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:14.759 [2024-11-04 02:25:00.938276] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:14.759 [2024-11-04 02:25:00.938282] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:14.759 [2024-11-04 02:25:00.945904] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:14.759 [2024-11-04 02:25:00.945924] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:14.759 [2024-11-04 02:25:00.953893] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:14.759 [2024-11-04 02:25:00.954381] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:14.759 [2024-11-04 02:25:00.965882] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:14.759 02:25:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.759 02:25:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:14.759 02:25:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:14.759 02:25:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:14.759 02:25:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.759 02:25:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.759 [2024-11-04 02:25:01.120996] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:14.759 [2024-11-04 02:25:01.121289] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:14.759 [2024-11-04 02:25:01.121302] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:14.759 [2024-11-04 02:25:01.121307] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:14.759 [2024-11-04 
02:25:01.128904] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:14.759 [2024-11-04 02:25:01.128921] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:14.759 [2024-11-04 02:25:01.136903] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:14.759 [2024-11-04 02:25:01.137386] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:14.759 [2024-11-04 02:25:01.145918] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:14.759 { 00:14:14.759 "ublk_device": "/dev/ublkb0", 00:14:14.759 "id": 0, 00:14:14.759 "queue_depth": 512, 00:14:14.759 "num_queues": 4, 00:14:14.759 "bdev_name": "Malloc0" 00:14:14.759 }, 00:14:14.759 { 00:14:14.759 "ublk_device": "/dev/ublkb1", 00:14:14.759 "id": 1, 00:14:14.759 "queue_depth": 512, 00:14:14.759 "num_queues": 4, 00:14:14.759 "bdev_name": "Malloc1" 00:14:14.759 }, 00:14:14.759 { 00:14:14.759 "ublk_device": "/dev/ublkb2", 00:14:14.759 "id": 2, 00:14:14.759 "queue_depth": 512, 00:14:14.759 "num_queues": 4, 00:14:14.759 "bdev_name": "Malloc2" 00:14:14.759 }, 00:14:14.759 { 00:14:14.759 "ublk_device": "/dev/ublkb3", 00:14:14.759 "id": 3, 00:14:14.759 "queue_depth": 512, 00:14:14.759 "num_queues": 4, 00:14:14.759 "bdev_name": "Malloc3" 00:14:14.759 } 00:14:14.759 ]' 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
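Note on the trace above: test_create_multi_ublk is a pure rpc.py loop — create a malloc bdev, expose it through ublk, then cross-check every field that ublk_get_disks reports. A minimal standalone sketch of the same flow (hedged: it assumes a running spdk_tgt with `rpc.py ublk_create_target` already issued and the ublk_drv module loaded; rpc.py stands in for the repo's scripts/rpc.py):

    #!/usr/bin/env bash
    # Create/verify loop condensed from ublk.sh: four 128 MiB malloc bdevs,
    # each exported as /dev/ublkb<i> with 4 queues of depth 512.
    set -euo pipefail
    MAX_DEV_ID=3
    for i in $(seq 0 "$MAX_DEV_ID"); do
        rpc.py bdev_malloc_create -b "Malloc$i" 128 4096
        rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512
    done
    disks=$(rpc.py ublk_get_disks)
    for i in $(seq 0 "$MAX_DEV_ID"); do
        [[ $(jq -r ".[$i].ublk_device" <<<"$disks") == "/dev/ublkb$i" ]]
        [[ $(jq -r ".[$i].id"          <<<"$disks") == "$i"           ]]
        [[ $(jq -r ".[$i].queue_depth" <<<"$disks") == 512            ]]
        [[ $(jq -r ".[$i].num_queues"  <<<"$disks") == 4              ]]
        [[ $(jq -r ".[$i].bdev_name"   <<<"$disks") == "Malloc$i"     ]]
    done

The per-field jq checks for devices 2 and 3 continue in the trace below.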
00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.759 02:25:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.760 [2024-11-04 02:25:01.824959] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:15.018 [2024-11-04 02:25:01.868932] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:15.018 [2024-11-04 02:25:01.869833] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:15.018 [2024-11-04 02:25:01.873046] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:15.018 [2024-11-04 02:25:01.873282] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:15.018 [2024-11-04 02:25:01.873292] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:15.018 02:25:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.018 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:15.018 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:15.018 02:25:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.018 02:25:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.018 [2024-11-04 02:25:01.891949] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:15.018 [2024-11-04 02:25:01.925415] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:15.018 [2024-11-04 02:25:01.926531] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:15.018 [2024-11-04 02:25:01.931904] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:15.018 [2024-11-04 02:25:01.932146] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:15.018 [2024-11-04 02:25:01.932159] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:15.018 02:25:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.018 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:15.018 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:15.018 02:25:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.018 02:25:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.018 [2024-11-04 02:25:01.946963] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:15.018 [2024-11-04 02:25:01.981406] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:15.018 [2024-11-04 02:25:01.982507] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:15.018 [2024-11-04 02:25:01.995881] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:15.018 [2024-11-04 02:25:01.996123] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:15.018 [2024-11-04 02:25:01.996137] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:15.018 02:25:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.018 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:15.018 02:25:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:15.018 02:25:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.018 02:25:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
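The stop path running here is the mirror image of the create loop: each ublk_stop_disk drives UBLK_CMD_STOP_DEV and then UBLK_CMD_DEL_DEV before the device leaves the tailq, and only afterwards are the target and the backing bdevs destroyed (those calls follow just below). As a hedged sketch, reusing the loop variables from the create sketch above:

    # Teardown mirroring the trace. The widened -t 120 RPC timeout on
    # ublk_destroy_target matches the log: kernel-side shutdown can be slow.
    for i in $(seq 0 "$MAX_DEV_ID"); do
        rpc.py ublk_stop_disk "$i"
    done
    rpc.py -t 120 ublk_destroy_target
    for i in $(seq 0 "$MAX_DEV_ID"); do
        rpc.py bdev_malloc_delete "Malloc$i"
    done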
00:14:15.018 [2024-11-04 02:25:02.000030] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:15.018 [2024-11-04 02:25:02.042387] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:15.018 [2024-11-04 02:25:02.043352] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:15.018 [2024-11-04 02:25:02.057897] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:15.018 [2024-11-04 02:25:02.058135] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:15.018 [2024-11-04 02:25:02.058149] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:15.018 02:25:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.018 02:25:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:15.277 [2024-11-04 02:25:02.245932] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:15.277 [2024-11-04 02:25:02.253883] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:15.277 [2024-11-04 02:25:02.253910] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:15.277 02:25:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:15.277 02:25:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:15.277 02:25:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:15.277 02:25:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.277 02:25:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.536 02:25:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.536 02:25:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:15.536 02:25:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:15.536 02:25:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.536 02:25:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.158 02:25:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.158 02:25:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:16.158 02:25:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:16.158 02:25:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.158 02:25:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.158 02:25:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.158 02:25:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:16.158 02:25:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:16.158 02:25:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.158 02:25:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.417 02:25:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.417 02:25:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:16.417 02:25:03 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:14:16.417 02:25:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.417 02:25:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.417 02:25:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.417 02:25:03 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:16.417 02:25:03 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:16.417 02:25:03 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:16.417 02:25:03 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:16.417 02:25:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.417 02:25:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.417 02:25:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.417 02:25:03 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:16.417 02:25:03 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:16.417 ************************************ 00:14:16.417 END TEST test_create_multi_ublk 00:14:16.417 ************************************ 00:14:16.417 02:25:03 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:16.417 00:14:16.417 real 0m3.182s 00:14:16.417 user 0m0.832s 00:14:16.417 sys 0m0.135s 00:14:16.417 02:25:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:16.417 02:25:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.417 02:25:03 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:16.417 02:25:03 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:16.417 02:25:03 ublk -- ublk/ublk.sh@130 -- # killprocess 70800 00:14:16.417 02:25:03 ublk -- common/autotest_common.sh@952 -- # '[' -z 70800 ']' 00:14:16.417 02:25:03 ublk -- common/autotest_common.sh@956 -- # kill -0 70800 00:14:16.417 02:25:03 ublk -- common/autotest_common.sh@957 -- # uname 00:14:16.417 02:25:03 ublk -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:16.417 02:25:03 ublk -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70800 00:14:16.417 killing process with pid 70800 00:14:16.417 02:25:03 ublk -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:16.417 02:25:03 ublk -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:16.417 02:25:03 ublk -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70800' 00:14:16.417 02:25:03 ublk -- common/autotest_common.sh@971 -- # kill 70800 00:14:16.417 02:25:03 ublk -- common/autotest_common.sh@976 -- # wait 70800 00:14:16.984 [2024-11-04 02:25:04.019501] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:16.984 [2024-11-04 02:25:04.019694] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:17.924 00:14:17.924 real 0m24.252s 00:14:17.924 user 0m34.323s 00:14:17.924 sys 0m9.688s 00:14:17.924 02:25:04 ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:17.924 02:25:04 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:17.924 ************************************ 00:14:17.924 END TEST ublk 00:14:17.924 ************************************ 00:14:17.924 02:25:04 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:17.924 02:25:04 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 
']' 00:14:17.924 02:25:04 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:17.924 02:25:04 -- common/autotest_common.sh@10 -- # set +x 00:14:17.924 ************************************ 00:14:17.924 START TEST ublk_recovery 00:14:17.924 ************************************ 00:14:17.924 02:25:04 ublk_recovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:17.924 * Looking for test storage... 00:14:17.924 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:17.924 02:25:04 ublk_recovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:17.924 02:25:04 ublk_recovery -- common/autotest_common.sh@1691 -- # lcov --version 00:14:17.924 02:25:04 ublk_recovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:17.924 02:25:04 ublk_recovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:17.924 02:25:04 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:17.924 02:25:04 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:17.924 02:25:04 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:17.924 02:25:04 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:17.924 02:25:04 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:17.924 02:25:04 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:17.924 02:25:04 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:17.924 02:25:04 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:17.924 02:25:04 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:17.924 02:25:04 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:17.924 02:25:04 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:17.924 02:25:04 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:14:17.924 02:25:04 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:14:17.924 02:25:04 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:17.924 02:25:04 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:17.924 02:25:04 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:14:17.924 02:25:04 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:14:17.924 02:25:04 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:17.924 02:25:04 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:14:17.924 02:25:04 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:17.924 02:25:04 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:14:17.924 02:25:04 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:14:17.924 02:25:04 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:17.924 02:25:04 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:14:17.924 02:25:04 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:17.924 02:25:04 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:17.924 02:25:04 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:17.924 02:25:04 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:14:17.924 02:25:04 ublk_recovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:17.924 02:25:04 ublk_recovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:17.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.924 --rc genhtml_branch_coverage=1 00:14:17.924 --rc genhtml_function_coverage=1 00:14:17.924 --rc genhtml_legend=1 00:14:17.924 --rc geninfo_all_blocks=1 00:14:17.924 --rc geninfo_unexecuted_blocks=1 00:14:17.924 00:14:17.924 ' 00:14:17.924 02:25:04 ublk_recovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:17.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.924 --rc genhtml_branch_coverage=1 00:14:17.924 --rc genhtml_function_coverage=1 00:14:17.924 --rc genhtml_legend=1 00:14:17.924 --rc geninfo_all_blocks=1 00:14:17.924 --rc geninfo_unexecuted_blocks=1 00:14:17.924 00:14:17.924 ' 00:14:17.924 02:25:04 ublk_recovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:17.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.924 --rc genhtml_branch_coverage=1 00:14:17.924 --rc genhtml_function_coverage=1 00:14:17.924 --rc genhtml_legend=1 00:14:17.924 --rc geninfo_all_blocks=1 00:14:17.924 --rc geninfo_unexecuted_blocks=1 00:14:17.924 00:14:17.924 ' 00:14:17.924 02:25:04 ublk_recovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:17.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.924 --rc genhtml_branch_coverage=1 00:14:17.924 --rc genhtml_function_coverage=1 00:14:17.924 --rc genhtml_legend=1 00:14:17.924 --rc geninfo_all_blocks=1 00:14:17.924 --rc geninfo_unexecuted_blocks=1 00:14:17.924 00:14:17.924 ' 00:14:17.924 02:25:04 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:17.924 02:25:04 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:17.924 02:25:04 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:17.924 02:25:04 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:17.924 02:25:04 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:17.924 02:25:04 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:17.924 02:25:04 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:17.924 02:25:04 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:17.924 02:25:04 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:14:17.924 02:25:04 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:17.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.924 02:25:04 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=71185 00:14:17.924 02:25:04 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:17.924 02:25:04 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 71185 00:14:17.924 02:25:04 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 71185 ']' 00:14:17.924 02:25:04 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.924 02:25:04 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:17.924 02:25:04 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.924 02:25:04 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:17.924 02:25:04 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:17.924 02:25:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.924 [2024-11-04 02:25:04.929651] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:14:17.925 [2024-11-04 02:25:04.929745] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71185 ] 00:14:18.184 [2024-11-04 02:25:05.078729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:18.184 [2024-11-04 02:25:05.159103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:18.184 [2024-11-04 02:25:05.159164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.752 02:25:05 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:18.752 02:25:05 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:14:18.752 02:25:05 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:18.752 02:25:05 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.752 02:25:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:18.752 [2024-11-04 02:25:05.723882] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:18.752 [2024-11-04 02:25:05.725401] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:18.752 02:25:05 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.752 02:25:05 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:18.752 02:25:05 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.752 02:25:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:18.752 malloc0 00:14:18.752 02:25:05 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.752 02:25:05 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:18.752 02:25:05 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.752 02:25:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:18.752 [2024-11-04 02:25:05.804202] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:14:18.752 [2024-11-04 02:25:05.804280] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:18.752 [2024-11-04 02:25:05.804289] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:18.752 [2024-11-04 02:25:05.804296] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:18.752 [2024-11-04 02:25:05.812955] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:18.752 [2024-11-04 02:25:05.812972] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:18.752 [2024-11-04 02:25:05.819898] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:18.752 [2024-11-04 02:25:05.820006] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:18.752 [2024-11-04 02:25:05.834900] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:18.752 1 00:14:18.752 02:25:05 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.752 02:25:05 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:20.125 02:25:06 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=71220 00:14:20.125 02:25:06 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:20.125 02:25:06 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:20.125 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:20.125 fio-3.35 00:14:20.125 Starting 1 process 00:14:25.392 02:25:11 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 71185 00:14:25.392 02:25:11 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:30.684 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 71185 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:30.684 02:25:16 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=71330 00:14:30.684 02:25:16 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:30.684 02:25:16 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:30.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.684 02:25:16 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 71330 00:14:30.684 02:25:16 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 71330 ']' 00:14:30.684 02:25:16 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.684 02:25:16 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:30.684 02:25:16 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.684 02:25:16 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:30.684 02:25:16 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:30.684 [2024-11-04 02:25:16.933887] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
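This is the crux of the recovery test: fio keeps hammering /dev/ublkb1 while the owning SPDK process is SIGKILLed, and a fresh process must re-adopt the still-live kernel device. A condensed sketch of the harness as it appears in the trace (the recover call itself follows below; SPDK_BIN_DIR and waitforlisten are the harness's own helpers and are assumed here):

    # Crash-and-recover harness condensed from ublk_recovery.sh.
    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
        --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
        --time_based --runtime=60 &
    fio_proc=$!
    sleep 5
    kill -9 "$spdk_pid"                         # hard-kill the target mid-I/O
    sleep 5
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &   # bring up a replacement target
    spdk_pid=$!
    waitforlisten "$spdk_pid"
    rpc.py ublk_create_target
    rpc.py bdev_malloc_create -b malloc0 64 4096
    rpc.py ublk_recover_disk malloc0 1          # re-adopt ublk id 1
    wait "$fio_proc"                            # fio must finish with err=0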
00:14:30.684 [2024-11-04 02:25:16.934169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71330 ] 00:14:30.684 [2024-11-04 02:25:17.095335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:30.684 [2024-11-04 02:25:17.198029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.684 [2024-11-04 02:25:17.198100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.946 02:25:17 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:30.946 02:25:17 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:14:30.946 02:25:17 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:30.946 02:25:17 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.946 02:25:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:30.946 [2024-11-04 02:25:17.845896] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:30.946 [2024-11-04 02:25:17.848003] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:30.946 02:25:17 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.946 02:25:17 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:30.946 02:25:17 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.946 02:25:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:30.946 malloc0 00:14:30.946 02:25:17 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.946 02:25:17 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:30.946 02:25:17 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.946 02:25:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:30.946 [2024-11-04 02:25:17.957181] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:30.946 [2024-11-04 02:25:17.957226] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:30.946 [2024-11-04 02:25:17.957236] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:30.946 [2024-11-04 02:25:17.964930] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:30.946 [2024-11-04 02:25:17.964962] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:30.946 1 00:14:30.946 02:25:17 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.946 02:25:17 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 71220 00:14:31.888 [2024-11-04 02:25:18.965011] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:31.888 [2024-11-04 02:25:18.968903] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:31.888 [2024-11-04 02:25:18.968916] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:33.263 [2024-11-04 02:25:19.972906] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:33.263 [2024-11-04 02:25:19.980891] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:33.263 [2024-11-04 02:25:19.980905] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:14:34.197 [2024-11-04 02:25:20.980933] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:34.197 [2024-11-04 02:25:20.988898] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:34.197 [2024-11-04 02:25:20.988914] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:34.197 [2024-11-04 02:25:20.988922] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:34.197 [2024-11-04 02:25:20.988989] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:56.148 [2024-11-04 02:25:42.032907] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:56.148 [2024-11-04 02:25:42.036357] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:56.149 [2024-11-04 02:25:42.041028] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:56.149 [2024-11-04 02:25:42.041045] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:22.704 00:15:22.704 fio_test: (groupid=0, jobs=1): err= 0: pid=71223: Mon Nov 4 02:26:07 2024 00:15:22.704 read: IOPS=14.7k, BW=57.6MiB/s (60.3MB/s)(3453MiB/60001msec) 00:15:22.704 slat (nsec): min=1108, max=390554, avg=4937.79, stdev=1466.86 00:15:22.704 clat (usec): min=955, max=30201k, avg=4164.21, stdev=248776.56 00:15:22.704 lat (usec): min=962, max=30201k, avg=4169.14, stdev=248776.56 00:15:22.704 clat percentiles (usec): 00:15:22.704 | 1.00th=[ 1778], 5.00th=[ 1893], 10.00th=[ 1909], 20.00th=[ 1942], 00:15:22.704 | 30.00th=[ 1958], 40.00th=[ 1975], 50.00th=[ 1991], 60.00th=[ 1991], 00:15:22.704 | 70.00th=[ 2008], 80.00th=[ 2040], 90.00th=[ 2073], 95.00th=[ 3032], 00:15:22.704 | 99.00th=[ 5211], 99.50th=[ 5735], 99.90th=[ 8160], 99.95th=[12780], 00:15:22.704 | 99.99th=[13435] 00:15:22.704 bw ( KiB/s): min= 1680, max=123488, per=100.00%, avg=115963.03, stdev=19917.69, samples=60 00:15:22.704 iops : min= 420, max=30872, avg=28990.75, stdev=4979.43, samples=60 00:15:22.704 write: IOPS=14.7k, BW=57.5MiB/s (60.3MB/s)(3448MiB/60001msec); 0 zone resets 00:15:22.704 slat (nsec): min=1206, max=136844, avg=4974.02, stdev=1406.07 00:15:22.704 clat (usec): min=1073, max=30201k, avg=4519.45, stdev=265041.96 00:15:22.704 lat (usec): min=1077, max=30201k, avg=4524.42, stdev=265041.96 00:15:22.704 clat percentiles (usec): 00:15:22.704 | 1.00th=[ 1811], 5.00th=[ 1975], 10.00th=[ 2008], 20.00th=[ 2024], 00:15:22.704 | 30.00th=[ 2040], 40.00th=[ 2057], 50.00th=[ 2073], 60.00th=[ 2089], 00:15:22.704 | 70.00th=[ 2114], 80.00th=[ 2114], 90.00th=[ 2180], 95.00th=[ 2933], 00:15:22.704 | 99.00th=[ 5342], 99.50th=[ 5866], 99.90th=[ 8160], 99.95th=[12780], 00:15:22.704 | 99.99th=[13698] 00:15:22.704 bw ( KiB/s): min= 1672, max=123072, per=100.00%, avg=115792.22, stdev=19917.12, samples=60 00:15:22.704 iops : min= 418, max=30768, avg=28948.05, stdev=4979.28, samples=60 00:15:22.704 lat (usec) : 1000=0.01% 00:15:22.704 lat (msec) : 2=35.09%, 4=62.10%, 10=2.74%, 20=0.06%, >=2000=0.01% 00:15:22.704 cpu : usr=3.36%, sys=14.74%, ctx=58108, majf=0, minf=13 00:15:22.704 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:22.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:22.704 issued 
rwts: total=884020,882678,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:22.704 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:22.704 00:15:22.704 Run status group 0 (all jobs): 00:15:22.704 READ: bw=57.6MiB/s (60.3MB/s), 57.6MiB/s-57.6MiB/s (60.3MB/s-60.3MB/s), io=3453MiB (3621MB), run=60001-60001msec 00:15:22.704 WRITE: bw=57.5MiB/s (60.3MB/s), 57.5MiB/s-57.5MiB/s (60.3MB/s-60.3MB/s), io=3448MiB (3615MB), run=60001-60001msec 00:15:22.704 00:15:22.704 Disk stats (read/write): 00:15:22.704 ublkb1: ios=880746/879363, merge=0/0, ticks=3631159/3866211, in_queue=7497370, util=99.89% 00:15:22.704 02:26:07 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:22.704 02:26:07 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.704 02:26:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:22.704 [2024-11-04 02:26:07.098298] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:22.704 [2024-11-04 02:26:07.130994] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:22.704 [2024-11-04 02:26:07.131205] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:22.704 [2024-11-04 02:26:07.141891] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:22.704 [2024-11-04 02:26:07.141989] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:22.704 [2024-11-04 02:26:07.141995] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:22.704 02:26:07 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.704 02:26:07 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:22.704 02:26:07 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.704 02:26:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:22.704 [2024-11-04 02:26:07.157963] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:22.704 [2024-11-04 02:26:07.165084] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:22.704 [2024-11-04 02:26:07.165117] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:22.704 02:26:07 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.704 02:26:07 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:22.704 02:26:07 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:22.704 02:26:07 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 71330 00:15:22.704 02:26:07 ublk_recovery -- common/autotest_common.sh@952 -- # '[' -z 71330 ']' 00:15:22.704 02:26:07 ublk_recovery -- common/autotest_common.sh@956 -- # kill -0 71330 00:15:22.704 02:26:07 ublk_recovery -- common/autotest_common.sh@957 -- # uname 00:15:22.704 02:26:07 ublk_recovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:22.704 02:26:07 ublk_recovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71330 00:15:22.704 killing process with pid 71330 00:15:22.704 02:26:07 ublk_recovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:22.704 02:26:07 ublk_recovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:22.704 02:26:07 ublk_recovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71330' 00:15:22.704 02:26:07 ublk_recovery -- common/autotest_common.sh@971 -- # kill 71330 00:15:22.704 02:26:07 ublk_recovery -- common/autotest_common.sh@976 -- # wait 71330 
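The killprocess helper traced here (and again for the FTL targets later) is deliberately defensive rather than a bare kill. Reconstructed from the xtrace lines alone, so treat this as an approximation of the autotest_common.sh logic, not its verbatim source:

    # Approximate shape of killprocess as visible in the trace.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                   # still alive?
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0
            # the trace checks for a wrapping 'sudo'; that branch is not
            # exercised in this run and is elided here
            [ "$name" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                  # reap, freeing the RPC socket
    }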
00:15:22.704 [2024-11-04 02:26:08.217701] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:22.704 [2024-11-04 02:26:08.217747] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:22.704 ************************************ 00:15:22.704 END TEST ublk_recovery 00:15:22.704 ************************************ 00:15:22.704 00:15:22.704 real 1m4.202s 00:15:22.704 user 1m48.335s 00:15:22.704 sys 0m20.149s 00:15:22.704 02:26:08 ublk_recovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:22.704 02:26:08 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:22.704 02:26:08 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:15:22.704 02:26:08 -- spdk/autotest.sh@256 -- # timing_exit lib 00:15:22.704 02:26:08 -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:22.704 02:26:08 -- common/autotest_common.sh@10 -- # set +x 00:15:22.704 02:26:09 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:15:22.705 02:26:09 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:15:22.705 02:26:09 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:15:22.705 02:26:09 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:15:22.705 02:26:09 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:22.705 02:26:09 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:22.705 02:26:09 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:15:22.705 02:26:09 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:15:22.705 02:26:09 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:15:22.705 02:26:09 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:15:22.705 02:26:09 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:22.705 02:26:09 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:15:22.705 02:26:09 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:22.705 02:26:09 -- common/autotest_common.sh@10 -- # set +x 00:15:22.705 ************************************ 00:15:22.705 START TEST ftl 00:15:22.705 ************************************ 00:15:22.705 02:26:09 ftl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:22.705 * Looking for test storage... 
00:15:22.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:22.705 02:26:09 ftl -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:22.705 02:26:09 ftl -- common/autotest_common.sh@1691 -- # lcov --version 00:15:22.705 02:26:09 ftl -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:22.705 02:26:09 ftl -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:22.705 02:26:09 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:22.705 02:26:09 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:22.705 02:26:09 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:22.705 02:26:09 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:15:22.705 02:26:09 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:15:22.705 02:26:09 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:15:22.705 02:26:09 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:15:22.705 02:26:09 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:15:22.705 02:26:09 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:15:22.705 02:26:09 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:15:22.705 02:26:09 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:22.705 02:26:09 ftl -- scripts/common.sh@344 -- # case "$op" in 00:15:22.705 02:26:09 ftl -- scripts/common.sh@345 -- # : 1 00:15:22.705 02:26:09 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:22.705 02:26:09 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:22.705 02:26:09 ftl -- scripts/common.sh@365 -- # decimal 1 00:15:22.705 02:26:09 ftl -- scripts/common.sh@353 -- # local d=1 00:15:22.705 02:26:09 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:22.705 02:26:09 ftl -- scripts/common.sh@355 -- # echo 1 00:15:22.705 02:26:09 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:15:22.705 02:26:09 ftl -- scripts/common.sh@366 -- # decimal 2 00:15:22.705 02:26:09 ftl -- scripts/common.sh@353 -- # local d=2 00:15:22.705 02:26:09 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:22.705 02:26:09 ftl -- scripts/common.sh@355 -- # echo 2 00:15:22.705 02:26:09 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:15:22.705 02:26:09 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:22.705 02:26:09 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:22.705 02:26:09 ftl -- scripts/common.sh@368 -- # return 0 00:15:22.705 02:26:09 ftl -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:22.705 02:26:09 ftl -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:22.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.705 --rc genhtml_branch_coverage=1 00:15:22.705 --rc genhtml_function_coverage=1 00:15:22.705 --rc genhtml_legend=1 00:15:22.705 --rc geninfo_all_blocks=1 00:15:22.705 --rc geninfo_unexecuted_blocks=1 00:15:22.705 00:15:22.705 ' 00:15:22.705 02:26:09 ftl -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:22.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.705 --rc genhtml_branch_coverage=1 00:15:22.705 --rc genhtml_function_coverage=1 00:15:22.705 --rc genhtml_legend=1 00:15:22.705 --rc geninfo_all_blocks=1 00:15:22.705 --rc geninfo_unexecuted_blocks=1 00:15:22.705 00:15:22.705 ' 00:15:22.705 02:26:09 ftl -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:22.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.705 --rc genhtml_branch_coverage=1 00:15:22.705 --rc genhtml_function_coverage=1 00:15:22.705 --rc 
genhtml_legend=1 00:15:22.705 --rc geninfo_all_blocks=1 00:15:22.705 --rc geninfo_unexecuted_blocks=1 00:15:22.705 00:15:22.705 ' 00:15:22.705 02:26:09 ftl -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:22.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.705 --rc genhtml_branch_coverage=1 00:15:22.705 --rc genhtml_function_coverage=1 00:15:22.705 --rc genhtml_legend=1 00:15:22.705 --rc geninfo_all_blocks=1 00:15:22.705 --rc geninfo_unexecuted_blocks=1 00:15:22.705 00:15:22.705 ' 00:15:22.705 02:26:09 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:22.705 02:26:09 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:22.705 02:26:09 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:22.705 02:26:09 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:22.705 02:26:09 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:22.705 02:26:09 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:22.705 02:26:09 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:22.705 02:26:09 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:22.705 02:26:09 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:22.705 02:26:09 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:22.705 02:26:09 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:22.705 02:26:09 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:22.705 02:26:09 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:22.705 02:26:09 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:22.705 02:26:09 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:22.705 02:26:09 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:22.705 02:26:09 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:22.705 02:26:09 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:22.705 02:26:09 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:22.705 02:26:09 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:22.705 02:26:09 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:22.705 02:26:09 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:22.705 02:26:09 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:22.705 02:26:09 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:22.705 02:26:09 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:22.705 02:26:09 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:22.705 02:26:09 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:22.705 02:26:09 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:22.705 02:26:09 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:22.705 02:26:09 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:22.705 02:26:09 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:22.705 02:26:09 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:15:22.705 02:26:09 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:22.705 02:26:09 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:22.705 02:26:09 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:22.705 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:22.705 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:22.705 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:22.705 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:22.705 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:22.705 02:26:09 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=72134 00:15:22.705 02:26:09 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:22.705 02:26:09 ftl -- ftl/ftl.sh@38 -- # waitforlisten 72134 00:15:22.705 02:26:09 ftl -- common/autotest_common.sh@833 -- # '[' -z 72134 ']' 00:15:22.705 02:26:09 ftl -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.705 02:26:09 ftl -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:22.705 02:26:09 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.705 02:26:09 ftl -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:22.705 02:26:09 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:22.705 [2024-11-04 02:26:09.697146] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:15:22.705 [2024-11-04 02:26:09.697357] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72134 ] 00:15:22.965 [2024-11-04 02:26:09.846431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.965 [2024-11-04 02:26:09.922167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.531 02:26:10 ftl -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:23.531 02:26:10 ftl -- common/autotest_common.sh@866 -- # return 0 00:15:23.531 02:26:10 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:23.789 02:26:10 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:24.361 02:26:11 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:24.361 02:26:11 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:24.931 02:26:11 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:24.931 02:26:11 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:24.931 02:26:11 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:24.932 02:26:12 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:24.932 02:26:12 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:24.932 02:26:12 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:24.932 02:26:12 ftl -- ftl/ftl.sh@50 -- # break 00:15:24.932 02:26:12 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:24.932 02:26:12 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:15:24.932 02:26:12 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:24.932 02:26:12 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:25.190 02:26:12 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:25.190 02:26:12 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:25.190 02:26:12 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:25.190 02:26:12 ftl -- ftl/ftl.sh@63 -- # break 00:15:25.190 02:26:12 ftl -- ftl/ftl.sh@66 -- # killprocess 72134 00:15:25.190 02:26:12 ftl -- common/autotest_common.sh@952 -- # '[' -z 72134 ']' 00:15:25.190 02:26:12 ftl -- common/autotest_common.sh@956 -- # kill -0 72134 00:15:25.190 02:26:12 ftl -- common/autotest_common.sh@957 -- # uname 00:15:25.190 02:26:12 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:25.190 02:26:12 ftl -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72134 00:15:25.190 killing process with pid 72134 00:15:25.190 02:26:12 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:25.190 02:26:12 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:25.190 02:26:12 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72134' 00:15:25.190 02:26:12 ftl -- common/autotest_common.sh@971 -- # kill 72134 00:15:25.190 02:26:12 ftl -- common/autotest_common.sh@976 -- # wait 72134 00:15:26.568 02:26:13 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:26.568 02:26:13 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:26.568 02:26:13 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:26.568 02:26:13 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:26.568 02:26:13 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:26.568 ************************************ 00:15:26.568 START TEST ftl_fio_basic 00:15:26.568 ************************************ 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:26.568 * Looking for test storage... 
00:15:26.568 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lcov --version 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:26.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.568 --rc genhtml_branch_coverage=1 00:15:26.568 --rc genhtml_function_coverage=1 00:15:26.568 --rc genhtml_legend=1 00:15:26.568 --rc geninfo_all_blocks=1 00:15:26.568 --rc geninfo_unexecuted_blocks=1 00:15:26.568 00:15:26.568 ' 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:26.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.568 --rc 
genhtml_branch_coverage=1 00:15:26.568 --rc genhtml_function_coverage=1 00:15:26.568 --rc genhtml_legend=1 00:15:26.568 --rc geninfo_all_blocks=1 00:15:26.568 --rc geninfo_unexecuted_blocks=1 00:15:26.568 00:15:26.568 ' 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:26.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.568 --rc genhtml_branch_coverage=1 00:15:26.568 --rc genhtml_function_coverage=1 00:15:26.568 --rc genhtml_legend=1 00:15:26.568 --rc geninfo_all_blocks=1 00:15:26.568 --rc geninfo_unexecuted_blocks=1 00:15:26.568 00:15:26.568 ' 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:26.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.568 --rc genhtml_branch_coverage=1 00:15:26.568 --rc genhtml_function_coverage=1 00:15:26.568 --rc genhtml_legend=1 00:15:26.568 --rc geninfo_all_blocks=1 00:15:26.568 --rc geninfo_unexecuted_blocks=1 00:15:26.568 00:15:26.568 ' 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:26.568 02:26:13 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:26.569 
02:26:13 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=72261 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 72261 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- common/autotest_common.sh@833 -- # '[' -z 72261 ']' 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
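The fio.sh@44-46 lines above start the SPDK target on core mask 7 (reactors on cores 0-2, matching the three "Reactor started" notices that follow) and then block in waitforlisten until pid 72261 answers on /var/tmp/spdk.sock. A condensed sketch of that launch-and-wait pattern — waitforlisten in autotest_common.sh is the real helper; the rpc_get_methods call here stands in for its readiness probe:

  # Start spdk_tgt on cores 0-2 and record its pid.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 &
  svcpid=$!
  # Poll until the UNIX-domain RPC socket accepts a trivial call,
  # bailing out early if the target process dies first.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$svcpid" 2>/dev/null || exit 1
    sleep 0.5
  done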
00:15:26.569 02:26:13 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:26.569 02:26:13 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:26.830 [2024-11-04 02:26:13.682562] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:15:26.831 [2024-11-04 02:26:13.682713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72261 ] 00:15:26.831 [2024-11-04 02:26:13.845092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:27.091 [2024-11-04 02:26:13.969072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.091 [2024-11-04 02:26:13.969398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.091 [2024-11-04 02:26:13.969409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:27.663 02:26:14 ftl.ftl_fio_basic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:27.663 02:26:14 ftl.ftl_fio_basic -- common/autotest_common.sh@866 -- # return 0 00:15:27.663 02:26:14 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:27.663 02:26:14 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:27.663 02:26:14 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:27.663 02:26:14 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:27.663 02:26:14 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:27.663 02:26:14 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:27.924 02:26:14 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:27.924 02:26:14 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:27.924 02:26:14 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:27.924 02:26:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:15:27.924 02:26:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:15:27.924 02:26:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:15:27.924 02:26:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:15:27.924 02:26:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:28.185 02:26:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:15:28.185 { 00:15:28.185 "name": "nvme0n1", 00:15:28.185 "aliases": [ 00:15:28.185 "c514914b-e3be-4bda-a0ff-01dcbabf14ff" 00:15:28.185 ], 00:15:28.185 "product_name": "NVMe disk", 00:15:28.185 "block_size": 4096, 00:15:28.185 "num_blocks": 1310720, 00:15:28.185 "uuid": "c514914b-e3be-4bda-a0ff-01dcbabf14ff", 00:15:28.185 "numa_id": -1, 00:15:28.185 "assigned_rate_limits": { 00:15:28.185 "rw_ios_per_sec": 0, 00:15:28.185 "rw_mbytes_per_sec": 0, 00:15:28.185 "r_mbytes_per_sec": 0, 00:15:28.185 "w_mbytes_per_sec": 0 00:15:28.185 }, 00:15:28.185 "claimed": false, 00:15:28.185 "zoned": false, 00:15:28.185 "supported_io_types": { 00:15:28.185 "read": true, 00:15:28.185 "write": true, 00:15:28.185 "unmap": true, 00:15:28.185 "flush": true, 00:15:28.185 "reset": true, 00:15:28.185 "nvme_admin": true, 00:15:28.185 "nvme_io": true, 00:15:28.185 "nvme_io_md": 
false, 00:15:28.185 "write_zeroes": true, 00:15:28.185 "zcopy": false, 00:15:28.185 "get_zone_info": false, 00:15:28.185 "zone_management": false, 00:15:28.185 "zone_append": false, 00:15:28.185 "compare": true, 00:15:28.185 "compare_and_write": false, 00:15:28.185 "abort": true, 00:15:28.185 "seek_hole": false, 00:15:28.185 "seek_data": false, 00:15:28.185 "copy": true, 00:15:28.185 "nvme_iov_md": false 00:15:28.185 }, 00:15:28.185 "driver_specific": { 00:15:28.185 "nvme": [ 00:15:28.185 { 00:15:28.185 "pci_address": "0000:00:11.0", 00:15:28.185 "trid": { 00:15:28.185 "trtype": "PCIe", 00:15:28.185 "traddr": "0000:00:11.0" 00:15:28.185 }, 00:15:28.185 "ctrlr_data": { 00:15:28.185 "cntlid": 0, 00:15:28.185 "vendor_id": "0x1b36", 00:15:28.185 "model_number": "QEMU NVMe Ctrl", 00:15:28.185 "serial_number": "12341", 00:15:28.185 "firmware_revision": "8.0.0", 00:15:28.185 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:28.185 "oacs": { 00:15:28.185 "security": 0, 00:15:28.185 "format": 1, 00:15:28.185 "firmware": 0, 00:15:28.185 "ns_manage": 1 00:15:28.185 }, 00:15:28.185 "multi_ctrlr": false, 00:15:28.185 "ana_reporting": false 00:15:28.185 }, 00:15:28.185 "vs": { 00:15:28.185 "nvme_version": "1.4" 00:15:28.185 }, 00:15:28.185 "ns_data": { 00:15:28.185 "id": 1, 00:15:28.185 "can_share": false 00:15:28.185 } 00:15:28.185 } 00:15:28.185 ], 00:15:28.185 "mp_policy": "active_passive" 00:15:28.185 } 00:15:28.185 } 00:15:28.185 ]' 00:15:28.185 02:26:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:15:28.185 02:26:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:15:28.185 02:26:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:15:28.185 02:26:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=1310720 00:15:28.185 02:26:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:15:28.185 02:26:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 5120 00:15:28.185 02:26:15 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:28.185 02:26:15 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:28.185 02:26:15 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:28.185 02:26:15 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:28.185 02:26:15 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:28.446 02:26:15 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:28.446 02:26:15 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:28.707 02:26:15 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=e59ed071-52e4-403d-86e6-f83bf193c902 00:15:28.707 02:26:15 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e59ed071-52e4-403d-86e6-f83bf193c902 00:15:28.968 02:26:15 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=244ab577-b0bc-4c49-81e6-155965705c43 00:15:28.968 02:26:15 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 244ab577-b0bc-4c49-81e6-155965705c43 00:15:28.968 02:26:15 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:28.968 02:26:15 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:28.968 02:26:15 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=244ab577-b0bc-4c49-81e6-155965705c43 00:15:28.968 02:26:15 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:28.968 02:26:15 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 244ab577-b0bc-4c49-81e6-155965705c43 00:15:28.968 02:26:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=244ab577-b0bc-4c49-81e6-155965705c43 00:15:28.968 02:26:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:15:28.968 02:26:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:15:28.968 02:26:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:15:28.968 02:26:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 244ab577-b0bc-4c49-81e6-155965705c43 00:15:28.968 02:26:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:15:28.968 { 00:15:28.968 "name": "244ab577-b0bc-4c49-81e6-155965705c43", 00:15:28.968 "aliases": [ 00:15:28.968 "lvs/nvme0n1p0" 00:15:28.968 ], 00:15:28.968 "product_name": "Logical Volume", 00:15:28.968 "block_size": 4096, 00:15:28.968 "num_blocks": 26476544, 00:15:28.968 "uuid": "244ab577-b0bc-4c49-81e6-155965705c43", 00:15:28.968 "assigned_rate_limits": { 00:15:28.968 "rw_ios_per_sec": 0, 00:15:28.968 "rw_mbytes_per_sec": 0, 00:15:28.968 "r_mbytes_per_sec": 0, 00:15:28.968 "w_mbytes_per_sec": 0 00:15:28.968 }, 00:15:28.968 "claimed": false, 00:15:28.968 "zoned": false, 00:15:28.968 "supported_io_types": { 00:15:28.968 "read": true, 00:15:28.968 "write": true, 00:15:28.968 "unmap": true, 00:15:28.968 "flush": false, 00:15:28.968 "reset": true, 00:15:28.968 "nvme_admin": false, 00:15:28.968 "nvme_io": false, 00:15:28.968 "nvme_io_md": false, 00:15:28.968 "write_zeroes": true, 00:15:28.968 "zcopy": false, 00:15:28.968 "get_zone_info": false, 00:15:28.968 "zone_management": false, 00:15:28.968 "zone_append": false, 00:15:28.968 "compare": false, 00:15:28.968 "compare_and_write": false, 00:15:28.968 "abort": false, 00:15:28.968 "seek_hole": true, 00:15:28.968 "seek_data": true, 00:15:28.968 "copy": false, 00:15:28.968 "nvme_iov_md": false 00:15:28.968 }, 00:15:28.968 "driver_specific": { 00:15:28.968 "lvol": { 00:15:28.968 "lvol_store_uuid": "e59ed071-52e4-403d-86e6-f83bf193c902", 00:15:28.968 "base_bdev": "nvme0n1", 00:15:28.968 "thin_provision": true, 00:15:28.968 "num_allocated_clusters": 0, 00:15:28.968 "snapshot": false, 00:15:28.968 "clone": false, 00:15:28.968 "esnap_clone": false 00:15:28.968 } 00:15:28.968 } 00:15:28.968 } 00:15:28.968 ]' 00:15:28.968 02:26:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:15:29.229 02:26:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:15:29.229 02:26:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:15:29.229 02:26:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:15:29.229 02:26:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:15:29.229 02:26:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:15:29.230 02:26:16 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:29.230 02:26:16 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:29.230 02:26:16 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:29.490 02:26:16 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:29.490 02:26:16 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:15:29.490 02:26:16 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 244ab577-b0bc-4c49-81e6-155965705c43 00:15:29.490 02:26:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=244ab577-b0bc-4c49-81e6-155965705c43 00:15:29.490 02:26:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:15:29.490 02:26:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:15:29.490 02:26:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:15:29.490 02:26:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 244ab577-b0bc-4c49-81e6-155965705c43 00:15:29.490 02:26:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:15:29.490 { 00:15:29.490 "name": "244ab577-b0bc-4c49-81e6-155965705c43", 00:15:29.490 "aliases": [ 00:15:29.490 "lvs/nvme0n1p0" 00:15:29.490 ], 00:15:29.490 "product_name": "Logical Volume", 00:15:29.490 "block_size": 4096, 00:15:29.490 "num_blocks": 26476544, 00:15:29.490 "uuid": "244ab577-b0bc-4c49-81e6-155965705c43", 00:15:29.490 "assigned_rate_limits": { 00:15:29.490 "rw_ios_per_sec": 0, 00:15:29.490 "rw_mbytes_per_sec": 0, 00:15:29.490 "r_mbytes_per_sec": 0, 00:15:29.490 "w_mbytes_per_sec": 0 00:15:29.490 }, 00:15:29.490 "claimed": false, 00:15:29.490 "zoned": false, 00:15:29.490 "supported_io_types": { 00:15:29.490 "read": true, 00:15:29.490 "write": true, 00:15:29.490 "unmap": true, 00:15:29.491 "flush": false, 00:15:29.491 "reset": true, 00:15:29.491 "nvme_admin": false, 00:15:29.491 "nvme_io": false, 00:15:29.491 "nvme_io_md": false, 00:15:29.491 "write_zeroes": true, 00:15:29.491 "zcopy": false, 00:15:29.491 "get_zone_info": false, 00:15:29.491 "zone_management": false, 00:15:29.491 "zone_append": false, 00:15:29.491 "compare": false, 00:15:29.491 "compare_and_write": false, 00:15:29.491 "abort": false, 00:15:29.491 "seek_hole": true, 00:15:29.491 "seek_data": true, 00:15:29.491 "copy": false, 00:15:29.491 "nvme_iov_md": false 00:15:29.491 }, 00:15:29.491 "driver_specific": { 00:15:29.491 "lvol": { 00:15:29.491 "lvol_store_uuid": "e59ed071-52e4-403d-86e6-f83bf193c902", 00:15:29.491 "base_bdev": "nvme0n1", 00:15:29.491 "thin_provision": true, 00:15:29.491 "num_allocated_clusters": 0, 00:15:29.491 "snapshot": false, 00:15:29.491 "clone": false, 00:15:29.491 "esnap_clone": false 00:15:29.491 } 00:15:29.491 } 00:15:29.491 } 00:15:29.491 ]' 00:15:29.491 02:26:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:15:29.750 02:26:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:15:29.750 02:26:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:15:29.750 02:26:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:15:29.750 02:26:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:15:29.750 02:26:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:15:29.750 02:26:16 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:29.750 02:26:16 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:29.750 02:26:16 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:29.750 02:26:16 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:29.750 02:26:16 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:29.750 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:29.750 02:26:16 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 244ab577-b0bc-4c49-81e6-155965705c43 00:15:29.750 02:26:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=244ab577-b0bc-4c49-81e6-155965705c43 00:15:29.750 02:26:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:15:29.750 02:26:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:15:29.750 02:26:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:15:29.750 02:26:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 244ab577-b0bc-4c49-81e6-155965705c43 00:15:30.008 02:26:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:15:30.008 { 00:15:30.008 "name": "244ab577-b0bc-4c49-81e6-155965705c43", 00:15:30.008 "aliases": [ 00:15:30.008 "lvs/nvme0n1p0" 00:15:30.008 ], 00:15:30.008 "product_name": "Logical Volume", 00:15:30.008 "block_size": 4096, 00:15:30.008 "num_blocks": 26476544, 00:15:30.008 "uuid": "244ab577-b0bc-4c49-81e6-155965705c43", 00:15:30.008 "assigned_rate_limits": { 00:15:30.008 "rw_ios_per_sec": 0, 00:15:30.008 "rw_mbytes_per_sec": 0, 00:15:30.008 "r_mbytes_per_sec": 0, 00:15:30.008 "w_mbytes_per_sec": 0 00:15:30.008 }, 00:15:30.008 "claimed": false, 00:15:30.008 "zoned": false, 00:15:30.008 "supported_io_types": { 00:15:30.008 "read": true, 00:15:30.008 "write": true, 00:15:30.008 "unmap": true, 00:15:30.008 "flush": false, 00:15:30.008 "reset": true, 00:15:30.008 "nvme_admin": false, 00:15:30.008 "nvme_io": false, 00:15:30.008 "nvme_io_md": false, 00:15:30.008 "write_zeroes": true, 00:15:30.008 "zcopy": false, 00:15:30.008 "get_zone_info": false, 00:15:30.008 "zone_management": false, 00:15:30.008 "zone_append": false, 00:15:30.008 "compare": false, 00:15:30.008 "compare_and_write": false, 00:15:30.008 "abort": false, 00:15:30.008 "seek_hole": true, 00:15:30.008 "seek_data": true, 00:15:30.008 "copy": false, 00:15:30.008 "nvme_iov_md": false 00:15:30.008 }, 00:15:30.008 "driver_specific": { 00:15:30.008 "lvol": { 00:15:30.008 "lvol_store_uuid": "e59ed071-52e4-403d-86e6-f83bf193c902", 00:15:30.008 "base_bdev": "nvme0n1", 00:15:30.009 "thin_provision": true, 00:15:30.009 "num_allocated_clusters": 0, 00:15:30.009 "snapshot": false, 00:15:30.009 "clone": false, 00:15:30.009 "esnap_clone": false 00:15:30.009 } 00:15:30.009 } 00:15:30.009 } 00:15:30.009 ]' 00:15:30.009 02:26:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:15:30.009 02:26:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:15:30.009 02:26:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:15:30.009 02:26:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:15:30.009 02:26:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:15:30.009 02:26:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:15:30.009 02:26:17 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:30.009 02:26:17 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:30.009 02:26:17 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 244ab577-b0bc-4c49-81e6-155965705c43 -c nvc0n1p0 --l2p_dram_limit 60 00:15:30.268 [2024-11-04 02:26:17.286238] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.268 [2024-11-04 02:26:17.286277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:30.268 [2024-11-04 02:26:17.286289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:30.268 [2024-11-04 02:26:17.286296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.268 [2024-11-04 02:26:17.286354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.268 [2024-11-04 02:26:17.286361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:30.268 [2024-11-04 02:26:17.286369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:15:30.268 [2024-11-04 02:26:17.286376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.268 [2024-11-04 02:26:17.286413] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:30.268 [2024-11-04 02:26:17.286992] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:30.268 [2024-11-04 02:26:17.287046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.268 [2024-11-04 02:26:17.287053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:30.268 [2024-11-04 02:26:17.287062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.644 ms 00:15:30.268 [2024-11-04 02:26:17.287068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.268 [2024-11-04 02:26:17.287132] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID bfe66912-ed70-4cc0-861d-9f5688caef5d 00:15:30.268 [2024-11-04 02:26:17.288098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.268 [2024-11-04 02:26:17.288120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:30.268 [2024-11-04 02:26:17.288129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:15:30.268 [2024-11-04 02:26:17.288137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.268 [2024-11-04 02:26:17.292790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.268 [2024-11-04 02:26:17.292816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:30.268 [2024-11-04 02:26:17.292824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.568 ms 00:15:30.268 [2024-11-04 02:26:17.292831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.268 [2024-11-04 02:26:17.292925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.268 [2024-11-04 02:26:17.292941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:30.268 [2024-11-04 02:26:17.292948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:15:30.268 [2024-11-04 02:26:17.292957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.268 [2024-11-04 02:26:17.293006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.268 [2024-11-04 02:26:17.293016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:30.268 [2024-11-04 02:26:17.293022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:30.268 [2024-11-04 02:26:17.293029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:15:30.268 [2024-11-04 02:26:17.293057] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:30.268 [2024-11-04 02:26:17.295911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.268 [2024-11-04 02:26:17.295934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:30.268 [2024-11-04 02:26:17.295944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.857 ms 00:15:30.268 [2024-11-04 02:26:17.295950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.268 [2024-11-04 02:26:17.295992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.268 [2024-11-04 02:26:17.296000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:30.268 [2024-11-04 02:26:17.296008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:15:30.268 [2024-11-04 02:26:17.296013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.268 [2024-11-04 02:26:17.296037] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:30.268 [2024-11-04 02:26:17.296149] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:30.268 [2024-11-04 02:26:17.296167] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:30.268 [2024-11-04 02:26:17.296176] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:30.268 [2024-11-04 02:26:17.296186] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:30.268 [2024-11-04 02:26:17.296192] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:30.268 [2024-11-04 02:26:17.296199] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:30.268 [2024-11-04 02:26:17.296205] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:30.268 [2024-11-04 02:26:17.296211] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:30.268 [2024-11-04 02:26:17.296216] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:30.268 [2024-11-04 02:26:17.296224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.268 [2024-11-04 02:26:17.296229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:30.268 [2024-11-04 02:26:17.296239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:15:30.268 [2024-11-04 02:26:17.296244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.268 [2024-11-04 02:26:17.296324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.269 [2024-11-04 02:26:17.296333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:30.269 [2024-11-04 02:26:17.296340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:15:30.269 [2024-11-04 02:26:17.296345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.269 [2024-11-04 02:26:17.296439] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:30.269 [2024-11-04 02:26:17.296448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:30.269 
[2024-11-04 02:26:17.296455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:30.269 [2024-11-04 02:26:17.296461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:30.269 [2024-11-04 02:26:17.296469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:30.269 [2024-11-04 02:26:17.296474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:30.269 [2024-11-04 02:26:17.296481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:30.269 [2024-11-04 02:26:17.296486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:30.269 [2024-11-04 02:26:17.296492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:30.269 [2024-11-04 02:26:17.296497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:30.269 [2024-11-04 02:26:17.296503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:30.269 [2024-11-04 02:26:17.296509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:30.269 [2024-11-04 02:26:17.296515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:30.269 [2024-11-04 02:26:17.296520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:30.269 [2024-11-04 02:26:17.296527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:30.269 [2024-11-04 02:26:17.296531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:30.269 [2024-11-04 02:26:17.296540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:30.269 [2024-11-04 02:26:17.296545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:30.269 [2024-11-04 02:26:17.296551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:30.269 [2024-11-04 02:26:17.296557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:30.269 [2024-11-04 02:26:17.296563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:30.269 [2024-11-04 02:26:17.296571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:30.269 [2024-11-04 02:26:17.296577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:30.269 [2024-11-04 02:26:17.296583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:30.269 [2024-11-04 02:26:17.296589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:30.269 [2024-11-04 02:26:17.296594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:30.269 [2024-11-04 02:26:17.296600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:30.269 [2024-11-04 02:26:17.296605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:30.269 [2024-11-04 02:26:17.296612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:30.269 [2024-11-04 02:26:17.296616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:30.269 [2024-11-04 02:26:17.296623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:30.269 [2024-11-04 02:26:17.296628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:30.269 [2024-11-04 02:26:17.296636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:30.269 [2024-11-04 02:26:17.296642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:15:30.269 [2024-11-04 02:26:17.296648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:30.269 [2024-11-04 02:26:17.296664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:30.269 [2024-11-04 02:26:17.296670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:30.269 [2024-11-04 02:26:17.296675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:30.269 [2024-11-04 02:26:17.296682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:30.269 [2024-11-04 02:26:17.296687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:30.269 [2024-11-04 02:26:17.296693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:30.269 [2024-11-04 02:26:17.296698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:30.269 [2024-11-04 02:26:17.296705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:30.269 [2024-11-04 02:26:17.296709] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:30.269 [2024-11-04 02:26:17.296717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:30.269 [2024-11-04 02:26:17.296722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:30.269 [2024-11-04 02:26:17.296729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:30.269 [2024-11-04 02:26:17.296735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:30.269 [2024-11-04 02:26:17.296743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:30.269 [2024-11-04 02:26:17.296748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:30.269 [2024-11-04 02:26:17.296754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:30.269 [2024-11-04 02:26:17.296760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:30.269 [2024-11-04 02:26:17.296766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:30.269 [2024-11-04 02:26:17.296775] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:30.269 [2024-11-04 02:26:17.296784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:30.269 [2024-11-04 02:26:17.296790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:30.269 [2024-11-04 02:26:17.296797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:30.269 [2024-11-04 02:26:17.296802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:30.269 [2024-11-04 02:26:17.296809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:30.269 [2024-11-04 02:26:17.296814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:30.269 [2024-11-04 02:26:17.296821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:30.269 [2024-11-04 
02:26:17.296827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:30.269 [2024-11-04 02:26:17.296834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:15:30.269 [2024-11-04 02:26:17.296840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:30.269 [2024-11-04 02:26:17.296848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:30.269 [2024-11-04 02:26:17.296854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:30.269 [2024-11-04 02:26:17.296861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:30.269 [2024-11-04 02:26:17.296879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:30.269 [2024-11-04 02:26:17.296886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:30.269 [2024-11-04 02:26:17.296892] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:30.269 [2024-11-04 02:26:17.296899] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:30.269 [2024-11-04 02:26:17.296905] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:30.269 [2024-11-04 02:26:17.296912] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:30.269 [2024-11-04 02:26:17.296918] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:30.269 [2024-11-04 02:26:17.296925] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:30.269 [2024-11-04 02:26:17.296931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.269 [2024-11-04 02:26:17.296937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:30.269 [2024-11-04 02:26:17.296945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:15:30.269 [2024-11-04 02:26:17.296952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.269 [2024-11-04 02:26:17.297036] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
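Two details in the startup dump above are worth checking by hand. First, the L2P region size follows directly from the entry count: FTL maps each 4 KiB user block through a 4-byte L2P entry, so the 20971520 entries reported occupy exactly the 80.00 MiB shown on the "Region l2p" line, while --l2p_dram_limit 60 caps how much of that map stays resident (the l2p cache notice further down settles at 59 of 60 MiB). A quick shell check of both numbers:

  # 20971520 L2P entries x 4 B/entry = 80 MiB, matching "Region l2p".
  echo $(( 20971520 * 4 / 1024 / 1024 ))      # -> 80
  # The same entry count at 4 KiB per block is the exposed capacity:
  echo $(( 20971520 * 4096 / 1024 / 1024 ))   # -> 81920 MiB = 80 GiB

Second, the "line 52: [: -eq: unary operator expected" message earlier is harmless but real: fio.sh@52 expands an unset variable, leaving test with '[' -eq 1 ']' and no left operand. Expanding with a quoted default, e.g. [ "${FTL_L2P_FLAT:-0}" -eq 1 ], would avoid the noise (the variable name here is illustrative; the log does not show which one was unset).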
00:15:30.269 [2024-11-04 02:26:17.297046] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:32.872 [2024-11-04 02:26:19.755551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.872 [2024-11-04 02:26:19.755605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:32.872 [2024-11-04 02:26:19.755616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2458.505 ms 00:15:32.872 [2024-11-04 02:26:19.755627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.872 [2024-11-04 02:26:19.775974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.872 [2024-11-04 02:26:19.776014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:32.872 [2024-11-04 02:26:19.776024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.165 ms 00:15:32.872 [2024-11-04 02:26:19.776031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.872 [2024-11-04 02:26:19.776143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.872 [2024-11-04 02:26:19.776153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:32.872 [2024-11-04 02:26:19.776159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:15:32.872 [2024-11-04 02:26:19.776169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.872 [2024-11-04 02:26:19.809504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.872 [2024-11-04 02:26:19.809546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:32.872 [2024-11-04 02:26:19.809559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.293 ms 00:15:32.872 [2024-11-04 02:26:19.809573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.872 [2024-11-04 02:26:19.809629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.872 [2024-11-04 02:26:19.809640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:32.872 [2024-11-04 02:26:19.809649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:32.872 [2024-11-04 02:26:19.809658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.872 [2024-11-04 02:26:19.810030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.872 [2024-11-04 02:26:19.810055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:32.872 [2024-11-04 02:26:19.810065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:15:32.872 [2024-11-04 02:26:19.810075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.872 [2024-11-04 02:26:19.810206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.872 [2024-11-04 02:26:19.810269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:32.872 [2024-11-04 02:26:19.810280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:15:32.872 [2024-11-04 02:26:19.810292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.872 [2024-11-04 02:26:19.824747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.872 [2024-11-04 02:26:19.824779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:32.872 [2024-11-04 
02:26:19.824789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.427 ms 00:15:32.872 [2024-11-04 02:26:19.824798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.872 [2024-11-04 02:26:19.834196] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:32.872 [2024-11-04 02:26:19.846235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.872 [2024-11-04 02:26:19.846269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:32.872 [2024-11-04 02:26:19.846279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.305 ms 00:15:32.872 [2024-11-04 02:26:19.846284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.872 [2024-11-04 02:26:19.893842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.872 [2024-11-04 02:26:19.894018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:32.872 [2024-11-04 02:26:19.894042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.521 ms 00:15:32.872 [2024-11-04 02:26:19.894051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.872 [2024-11-04 02:26:19.894479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.872 [2024-11-04 02:26:19.894514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:32.872 [2024-11-04 02:26:19.894530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:15:32.872 [2024-11-04 02:26:19.894538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.872 [2024-11-04 02:26:19.918802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.872 [2024-11-04 02:26:19.918836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:32.872 [2024-11-04 02:26:19.918849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.184 ms 00:15:32.872 [2024-11-04 02:26:19.918859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.872 [2024-11-04 02:26:19.941377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.872 [2024-11-04 02:26:19.941502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:32.872 [2024-11-04 02:26:19.941523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.456 ms 00:15:32.872 [2024-11-04 02:26:19.941530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:32.872 [2024-11-04 02:26:19.942135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:32.872 [2024-11-04 02:26:19.942154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:32.872 [2024-11-04 02:26:19.942165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:15:32.872 [2024-11-04 02:26:19.942173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.133 [2024-11-04 02:26:20.005915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.133 [2024-11-04 02:26:20.005951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:33.133 [2024-11-04 02:26:20.005968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.695 ms 00:15:33.133 [2024-11-04 02:26:20.005988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.133 [2024-11-04 
02:26:20.030741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.133 [2024-11-04 02:26:20.030776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:33.133 [2024-11-04 02:26:20.030789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.657 ms 00:15:33.133 [2024-11-04 02:26:20.030798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.133 [2024-11-04 02:26:20.054454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.133 [2024-11-04 02:26:20.054485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:33.133 [2024-11-04 02:26:20.054497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.621 ms 00:15:33.133 [2024-11-04 02:26:20.054504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.133 [2024-11-04 02:26:20.078434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.133 [2024-11-04 02:26:20.078465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:33.133 [2024-11-04 02:26:20.078477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.898 ms 00:15:33.133 [2024-11-04 02:26:20.078484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.133 [2024-11-04 02:26:20.078521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.133 [2024-11-04 02:26:20.078529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:33.133 [2024-11-04 02:26:20.078542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:33.133 [2024-11-04 02:26:20.078550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.133 [2024-11-04 02:26:20.078646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.133 [2024-11-04 02:26:20.078655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:33.133 [2024-11-04 02:26:20.078665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:15:33.133 [2024-11-04 02:26:20.078672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.133 [2024-11-04 02:26:20.079597] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2792.922 ms, result 0 00:15:33.133 { 00:15:33.133 "name": "ftl0", 00:15:33.133 "uuid": "bfe66912-ed70-4cc0-861d-9f5688caef5d" 00:15:33.133 } 00:15:33.133 02:26:20 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:33.133 02:26:20 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:15:33.133 02:26:20 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:33.133 02:26:20 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local i 00:15:33.133 02:26:20 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:33.133 02:26:20 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:33.133 02:26:20 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:33.394 02:26:20 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:33.394 [ 00:15:33.394 { 00:15:33.394 "name": "ftl0", 00:15:33.394 "aliases": [ 00:15:33.394 "bfe66912-ed70-4cc0-861d-9f5688caef5d" 00:15:33.394 ], 00:15:33.394 "product_name": "FTL 
disk", 00:15:33.394 "block_size": 4096, 00:15:33.394 "num_blocks": 20971520, 00:15:33.394 "uuid": "bfe66912-ed70-4cc0-861d-9f5688caef5d", 00:15:33.394 "assigned_rate_limits": { 00:15:33.394 "rw_ios_per_sec": 0, 00:15:33.394 "rw_mbytes_per_sec": 0, 00:15:33.394 "r_mbytes_per_sec": 0, 00:15:33.394 "w_mbytes_per_sec": 0 00:15:33.394 }, 00:15:33.394 "claimed": false, 00:15:33.394 "zoned": false, 00:15:33.394 "supported_io_types": { 00:15:33.394 "read": true, 00:15:33.394 "write": true, 00:15:33.394 "unmap": true, 00:15:33.394 "flush": true, 00:15:33.394 "reset": false, 00:15:33.394 "nvme_admin": false, 00:15:33.394 "nvme_io": false, 00:15:33.394 "nvme_io_md": false, 00:15:33.394 "write_zeroes": true, 00:15:33.394 "zcopy": false, 00:15:33.394 "get_zone_info": false, 00:15:33.394 "zone_management": false, 00:15:33.394 "zone_append": false, 00:15:33.394 "compare": false, 00:15:33.394 "compare_and_write": false, 00:15:33.394 "abort": false, 00:15:33.394 "seek_hole": false, 00:15:33.394 "seek_data": false, 00:15:33.394 "copy": false, 00:15:33.394 "nvme_iov_md": false 00:15:33.394 }, 00:15:33.394 "driver_specific": { 00:15:33.394 "ftl": { 00:15:33.394 "base_bdev": "244ab577-b0bc-4c49-81e6-155965705c43", 00:15:33.394 "cache": "nvc0n1p0" 00:15:33.394 } 00:15:33.394 } 00:15:33.394 } 00:15:33.394 ] 00:15:33.394 02:26:20 ftl.ftl_fio_basic -- common/autotest_common.sh@909 -- # return 0 00:15:33.394 02:26:20 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:33.395 02:26:20 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:33.657 02:26:20 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:33.657 02:26:20 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:33.918 [2024-11-04 02:26:20.885154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.918 [2024-11-04 02:26:20.885197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:33.918 [2024-11-04 02:26:20.885210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:33.918 [2024-11-04 02:26:20.885219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.918 [2024-11-04 02:26:20.885255] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:33.918 [2024-11-04 02:26:20.887897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.919 [2024-11-04 02:26:20.887925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:33.919 [2024-11-04 02:26:20.887936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.623 ms 00:15:33.919 [2024-11-04 02:26:20.887944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.919 [2024-11-04 02:26:20.888486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.919 [2024-11-04 02:26:20.888499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:33.919 [2024-11-04 02:26:20.888510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:15:33.919 [2024-11-04 02:26:20.888517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.919 [2024-11-04 02:26:20.891768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.919 [2024-11-04 02:26:20.891886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:33.919 
[2024-11-04 02:26:20.891906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.221 ms 00:15:33.919 [2024-11-04 02:26:20.891913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.919 [2024-11-04 02:26:20.898078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.919 [2024-11-04 02:26:20.898102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:33.919 [2024-11-04 02:26:20.898114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.130 ms 00:15:33.919 [2024-11-04 02:26:20.898121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.919 [2024-11-04 02:26:20.922197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.919 [2024-11-04 02:26:20.922228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:33.919 [2024-11-04 02:26:20.922241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.983 ms 00:15:33.919 [2024-11-04 02:26:20.922247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.919 [2024-11-04 02:26:20.938044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.919 [2024-11-04 02:26:20.938073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:33.919 [2024-11-04 02:26:20.938086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.725 ms 00:15:33.919 [2024-11-04 02:26:20.938094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.919 [2024-11-04 02:26:20.938324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.919 [2024-11-04 02:26:20.938334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:33.919 [2024-11-04 02:26:20.938344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:15:33.919 [2024-11-04 02:26:20.938351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.919 [2024-11-04 02:26:20.961998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.919 [2024-11-04 02:26:20.962027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:33.919 [2024-11-04 02:26:20.962040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.616 ms 00:15:33.919 [2024-11-04 02:26:20.962048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.919 [2024-11-04 02:26:20.985540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.919 [2024-11-04 02:26:20.985569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:33.919 [2024-11-04 02:26:20.985580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.446 ms 00:15:33.919 [2024-11-04 02:26:20.985587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.919 [2024-11-04 02:26:21.008477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.919 [2024-11-04 02:26:21.008506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:33.919 [2024-11-04 02:26:21.008518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.845 ms 00:15:33.919 [2024-11-04 02:26:21.008524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.179 [2024-11-04 02:26:21.031423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.179 [2024-11-04 02:26:21.031539] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:34.179 [2024-11-04 02:26:21.031559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.786 ms 00:15:34.179 [2024-11-04 02:26:21.031566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.179 [2024-11-04 02:26:21.031604] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:34.179 [2024-11-04 02:26:21.031618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:34.179 [2024-11-04 02:26:21.031629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:34.180 [2024-11-04 02:26:21.031637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:34.180 [2024-11-04 02:26:21.031646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:34.180 [2024-11-04 02:26:21.031654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:34.180 [2024-11-04 02:26:21.031663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:34.180 [2024-11-04 02:26:21.031670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:34.180 [2024-11-04 02:26:21.031682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:34.180 [2024-11-04 02:26:21.031689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:34.180 [2024-11-04 02:26:21.031698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:34.180 [2024-11-04 02:26:21.031706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:34.180 [2024-11-04 02:26:21.031715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:34.180 [2024-11-04 02:26:21.031723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:34.180 [2024-11-04 02:26:21.031738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:34.180 [2024-11-04 02:26:21.031745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:34.180 [2024-11-04 02:26:21.031754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:34.180 [2024-11-04 02:26:21.031761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:34.180 [2024-11-04 02:26:21.031770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:34.180 [2024-11-04 02:26:21.031778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:34.180 [2024-11-04 02:26:21.031787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:34.180 [2024-11-04 02:26:21.031794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:34.180 [2024-11-04 02:26:21.031805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:34.180 
[2024-11-04 02:26:21.031812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 23-96: 0 / 261120 wr_cnt: 0 state: free (74 identical per-band entries condensed) 00:15:34.181 [2024-11-04 02:26:21.032477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*:
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:34.181 [2024-11-04 02:26:21.032486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:34.181 [2024-11-04 02:26:21.032493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:34.181 [2024-11-04 02:26:21.032503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:34.181 [2024-11-04 02:26:21.032519] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:34.181 [2024-11-04 02:26:21.032528] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bfe66912-ed70-4cc0-861d-9f5688caef5d 00:15:34.181 [2024-11-04 02:26:21.032536] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:34.181 [2024-11-04 02:26:21.032547] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:34.181 [2024-11-04 02:26:21.032554] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:34.181 [2024-11-04 02:26:21.032562] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:34.181 [2024-11-04 02:26:21.032569] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:34.181 [2024-11-04 02:26:21.032580] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:34.181 [2024-11-04 02:26:21.032587] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:34.181 [2024-11-04 02:26:21.032595] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:34.181 [2024-11-04 02:26:21.032601] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:34.181 [2024-11-04 02:26:21.032609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.181 [2024-11-04 02:26:21.032617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:34.181 [2024-11-04 02:26:21.032626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.006 ms 00:15:34.181 [2024-11-04 02:26:21.032633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.181 [2024-11-04 02:26:21.045104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.181 [2024-11-04 02:26:21.045131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:34.181 [2024-11-04 02:26:21.045143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.430 ms 00:15:34.181 [2024-11-04 02:26:21.045152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.181 [2024-11-04 02:26:21.045504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.181 [2024-11-04 02:26:21.045513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:34.181 [2024-11-04 02:26:21.045522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:15:34.181 [2024-11-04 02:26:21.045529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.181 [2024-11-04 02:26:21.089649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:34.181 [2024-11-04 02:26:21.089681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:34.181 [2024-11-04 02:26:21.089694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:34.181 [2024-11-04 02:26:21.089702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
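The shutdown trace above is the effect of two rpc.py calls made by fio.sh just before it: save_subsystem_config captures the bdev layout for the fio jobs, and bdev_ftl_unload drives the 'FTL shutdown' management pipeline (persist L2P and metadata, set the clean state, then the rollback/deinit steps). A minimal shell sketch of that sequence, using the bdev name and RPC script from this log; the ftl.json output path is an assumption:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Wrap the bdev subsystem dump in a subsystems array, matching the
    # echo '{"subsystems": [' ... ']}' framing visible in the trace.
    echo '{"subsystems": [' > ftl.json          # assumed output path
    $RPC save_subsystem_config -n bdev >> ftl.json
    echo ']}' >> ftl.json
    # Unloading the bdev runs the traced management process to completion.
    $RPC bdev_ftl_unload -b ftl0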
00:15:34.181 [2024-11-04 02:26:21.089765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:34.181 [2024-11-04 02:26:21.089773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:34.181 [2024-11-04 02:26:21.089783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:34.181 [2024-11-04 02:26:21.089790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.181 [2024-11-04 02:26:21.089897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:34.181 [2024-11-04 02:26:21.089908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:34.181 [2024-11-04 02:26:21.089918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:34.181 [2024-11-04 02:26:21.089927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.181 [2024-11-04 02:26:21.089969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:34.181 [2024-11-04 02:26:21.089977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:34.181 [2024-11-04 02:26:21.089987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:34.181 [2024-11-04 02:26:21.089994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.181 [2024-11-04 02:26:21.171232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:34.181 [2024-11-04 02:26:21.171275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:34.181 [2024-11-04 02:26:21.171287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:34.181 [2024-11-04 02:26:21.171298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.181 [2024-11-04 02:26:21.234826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:34.181 [2024-11-04 02:26:21.234863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:34.181 [2024-11-04 02:26:21.234890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:34.181 [2024-11-04 02:26:21.234898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.181 [2024-11-04 02:26:21.234971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:34.181 [2024-11-04 02:26:21.234981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:34.181 [2024-11-04 02:26:21.234990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:34.181 [2024-11-04 02:26:21.234998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.181 [2024-11-04 02:26:21.235080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:34.181 [2024-11-04 02:26:21.235090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:34.181 [2024-11-04 02:26:21.235099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:34.181 [2024-11-04 02:26:21.235106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.181 [2024-11-04 02:26:21.235220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:34.181 [2024-11-04 02:26:21.235230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:34.181 [2024-11-04 02:26:21.235240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:34.181 [2024-11-04 
02:26:21.235247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.181 [2024-11-04 02:26:21.235299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:34.181 [2024-11-04 02:26:21.235309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:34.181 [2024-11-04 02:26:21.235321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:34.181 [2024-11-04 02:26:21.235328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.181 [2024-11-04 02:26:21.235373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:34.181 [2024-11-04 02:26:21.235381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:34.181 [2024-11-04 02:26:21.235391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:34.181 [2024-11-04 02:26:21.235398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.181 [2024-11-04 02:26:21.235458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:34.181 [2024-11-04 02:26:21.235467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:34.181 [2024-11-04 02:26:21.235477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:34.181 [2024-11-04 02:26:21.235484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.181 [2024-11-04 02:26:21.235650] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 350.475 ms, result 0 00:15:34.181 true 00:15:34.181 02:26:21 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 72261 00:15:34.181 02:26:21 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # '[' -z 72261 ']' 00:15:34.181 02:26:21 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # kill -0 72261 00:15:34.181 02:26:21 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # uname 00:15:34.181 02:26:21 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:34.181 02:26:21 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72261 00:15:34.442 02:26:21 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:34.442 02:26:21 ftl.ftl_fio_basic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:34.442 02:26:21 ftl.ftl_fio_basic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72261' 00:15:34.442 killing process with pid 72261 00:15:34.442 02:26:21 ftl.ftl_fio_basic -- common/autotest_common.sh@971 -- # kill 72261 00:15:34.442 02:26:21 ftl.ftl_fio_basic -- common/autotest_common.sh@976 -- # wait 72261 00:15:41.028 02:26:27 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:41.028 02:26:27 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:41.028 02:26:27 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:41.028 02:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:41.028 02:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:41.028 02:26:27 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:41.028 02:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:41.028 02:26:27 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:15:41.028 02:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:41.028 02:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:15:41.028 02:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:41.028 02:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:15:41.028 02:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:15:41.028 02:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:15:41.028 02:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:41.028 02:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:15:41.028 02:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:15:41.028 02:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:41.028 02:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:41.028 02:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:15:41.028 02:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:41.028 02:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:41.028 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:41.028 fio-3.35 00:15:41.028 Starting 1 thread 00:15:46.312 00:15:46.312 test: (groupid=0, jobs=1): err= 0: pid=72451: Mon Nov 4 02:26:32 2024 00:15:46.312 read: IOPS=942, BW=62.6MiB/s (65.7MB/s)(255MiB/4065msec) 00:15:46.312 slat (nsec): min=4031, max=29927, avg=6103.15, stdev=2766.57 00:15:46.312 clat (usec): min=259, max=1721, avg=477.83, stdev=235.59 00:15:46.312 lat (usec): min=263, max=1743, avg=483.93, stdev=236.77 00:15:46.312 clat percentiles (usec): 00:15:46.312 | 1.00th=[ 314], 5.00th=[ 322], 10.00th=[ 322], 20.00th=[ 326], 00:15:46.312 | 30.00th=[ 330], 40.00th=[ 334], 50.00th=[ 338], 60.00th=[ 351], 00:15:46.312 | 70.00th=[ 465], 80.00th=[ 750], 90.00th=[ 898], 95.00th=[ 938], 00:15:46.312 | 99.00th=[ 1090], 99.50th=[ 1188], 99.90th=[ 1532], 99.95th=[ 1598], 00:15:46.312 | 99.99th=[ 1729] 00:15:46.312 write: IOPS=949, BW=63.0MiB/s (66.1MB/s)(256MiB/4062msec); 0 zone resets 00:15:46.312 slat (nsec): min=14725, max=63783, avg=20033.75, stdev=4516.71 00:15:46.312 clat (usec): min=308, max=4015, avg=539.63, stdev=296.37 00:15:46.312 lat (usec): min=325, max=4036, avg=559.66, stdev=298.64 00:15:46.312 clat percentiles (usec): 00:15:46.312 | 1.00th=[ 338], 5.00th=[ 347], 10.00th=[ 351], 20.00th=[ 355], 00:15:46.312 | 30.00th=[ 355], 40.00th=[ 359], 50.00th=[ 367], 60.00th=[ 388], 00:15:46.312 | 70.00th=[ 562], 80.00th=[ 857], 90.00th=[ 988], 95.00th=[ 1045], 00:15:46.312 | 99.00th=[ 1549], 99.50th=[ 1745], 99.90th=[ 2409], 99.95th=[ 2442], 00:15:46.312 | 99.99th=[ 4015] 00:15:46.312 bw ( KiB/s): min=35224, max=91120, per=99.31%, avg=64107.00, stdev=25495.65, samples=8 00:15:46.312 iops : min= 518, max= 1340, avg=942.75, stdev=374.94, samples=8 00:15:46.312 lat (usec) : 500=69.36%, 750=9.47%, 1000=16.26% 
00:15:46.312 lat (msec) : 2=4.86%, 4=0.04%, 10=0.01% 00:15:46.312 cpu : usr=99.16%, sys=0.07%, ctx=5, majf=0, minf=1169 00:15:46.312 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:46.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.312 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.312 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:46.312 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:46.312 00:15:46.312 Run status group 0 (all jobs): 00:15:46.312 READ: bw=62.6MiB/s (65.7MB/s), 62.6MiB/s-62.6MiB/s (65.7MB/s-65.7MB/s), io=255MiB (267MB), run=4065-4065msec 00:15:46.312 WRITE: bw=63.0MiB/s (66.1MB/s), 63.0MiB/s-63.0MiB/s (66.1MB/s-66.1MB/s), io=256MiB (269MB), run=4062-4062msec 00:15:47.698 ----------------------------------------------------- 00:15:47.698 Suppressions used: 00:15:47.698 count bytes template 00:15:47.698 1 5 /usr/src/fio/parse.c 00:15:47.698 1 8 libtcmalloc_minimal.so 00:15:47.698 1 904 libcrypto.so 00:15:47.698 ----------------------------------------------------- 00:15:47.698 00:15:47.698 02:26:34 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:47.698 02:26:34 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:47.698 02:26:34 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:47.698 02:26:34 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:47.698 02:26:34 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:47.698 02:26:34 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:47.698 02:26:34 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:47.698 02:26:34 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:47.698 02:26:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:47.698 02:26:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:15:47.698 02:26:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:47.698 02:26:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:15:47.698 02:26:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:47.698 02:26:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:15:47.698 02:26:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:15:47.698 02:26:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:15:47.698 02:26:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:47.698 02:26:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:15:47.698 02:26:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:15:47.698 02:26:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:47.698 02:26:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:47.698 02:26:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:15:47.698 02:26:34 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:47.698 02:26:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:47.698 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:47.698 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:47.698 fio-3.35 00:15:47.698 Starting 2 threads 00:16:19.810 00:16:19.810 first_half: (groupid=0, jobs=1): err= 0: pid=72559: Mon Nov 4 02:27:01 2024 00:16:19.810 read: IOPS=2578, BW=10.1MiB/s (10.6MB/s)(255MiB/25303msec) 00:16:19.810 slat (usec): min=3, max=490, avg= 4.92, stdev= 2.43 00:16:19.810 clat (usec): min=585, max=404224, avg=38333.27, stdev=24221.31 00:16:19.810 lat (usec): min=589, max=404229, avg=38338.20, stdev=24221.45 00:16:19.810 clat percentiles (msec): 00:16:19.810 | 1.00th=[ 10], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 31], 00:16:19.810 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 33], 60.00th=[ 35], 00:16:19.810 | 70.00th=[ 36], 80.00th=[ 40], 90.00th=[ 45], 95.00th=[ 58], 00:16:19.810 | 99.00th=[ 169], 99.50th=[ 209], 99.90th=[ 279], 99.95th=[ 305], 00:16:19.810 | 99.99th=[ 388] 00:16:19.810 write: IOPS=3106, BW=12.1MiB/s (12.7MB/s)(256MiB/21096msec); 0 zone resets 00:16:19.810 slat (usec): min=3, max=1015, avg= 6.84, stdev=11.32 00:16:19.810 clat (usec): min=371, max=125862, avg=11236.71, stdev=17676.17 00:16:19.810 lat (usec): min=379, max=125871, avg=11243.54, stdev=17676.36 00:16:19.810 clat percentiles (usec): 00:16:19.810 | 1.00th=[ 701], 5.00th=[ 873], 10.00th=[ 1045], 20.00th=[ 1401], 00:16:19.810 | 30.00th=[ 2769], 40.00th=[ 4293], 50.00th=[ 5473], 60.00th=[ 7046], 00:16:19.810 | 70.00th=[ 10159], 80.00th=[ 15139], 90.00th=[ 21365], 95.00th=[ 58459], 00:16:19.810 | 99.00th=[106431], 99.50th=[112722], 99.90th=[120062], 99.95th=[121111], 00:16:19.810 | 99.99th=[125305] 00:16:19.810 bw ( KiB/s): min= 920, max=41632, per=100.00%, avg=26211.90, stdev=13369.15, samples=20 00:16:19.810 iops : min= 230, max=10408, avg=6552.95, stdev=3342.29, samples=20 00:16:19.810 lat (usec) : 500=0.02%, 750=1.02%, 1000=3.31% 00:16:19.810 lat (msec) : 2=8.29%, 4=6.42%, 10=16.43%, 20=9.66%, 50=48.59% 00:16:19.810 lat (msec) : 100=4.30%, 250=1.86%, 500=0.11% 00:16:19.810 cpu : usr=99.10%, sys=0.16%, ctx=40, majf=0, minf=5595 00:16:19.810 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:19.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.810 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:19.810 issued rwts: total=65242,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:19.810 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:19.810 second_half: (groupid=0, jobs=1): err= 0: pid=72560: Mon Nov 4 02:27:01 2024 00:16:19.810 read: IOPS=2559, BW=10.00MiB/s (10.5MB/s)(255MiB/25527msec) 00:16:19.810 slat (nsec): min=2977, max=34442, avg=5203.06, stdev=1425.64 00:16:19.810 clat (usec): min=649, max=444102, avg=38214.48, stdev=28586.82 00:16:19.810 lat (usec): min=654, max=444108, avg=38219.68, stdev=28586.96 00:16:19.810 clat percentiles (msec): 00:16:19.810 | 1.00th=[ 10], 5.00th=[ 28], 10.00th=[ 31], 20.00th=[ 31], 00:16:19.810 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 33], 60.00th=[ 34], 00:16:19.810 | 70.00th=[ 36], 80.00th=[ 39], 
90.00th=[ 44], 95.00th=[ 54], 00:16:19.810 | 99.00th=[ 192], 99.50th=[ 262], 99.90th=[ 351], 99.95th=[ 409], 00:16:19.810 | 99.99th=[ 439] 00:16:19.810 write: IOPS=2824, BW=11.0MiB/s (11.6MB/s)(256MiB/23200msec); 0 zone resets 00:16:19.810 slat (usec): min=3, max=788, avg= 6.83, stdev= 6.00 00:16:19.810 clat (usec): min=376, max=126386, avg=11743.74, stdev=18321.12 00:16:19.810 lat (usec): min=386, max=126393, avg=11750.57, stdev=18321.35 00:16:19.810 clat percentiles (usec): 00:16:19.810 | 1.00th=[ 676], 5.00th=[ 873], 10.00th=[ 1123], 20.00th=[ 1549], 00:16:19.810 | 30.00th=[ 2868], 40.00th=[ 4178], 50.00th=[ 5800], 60.00th=[ 7308], 00:16:19.810 | 70.00th=[ 9372], 80.00th=[ 15270], 90.00th=[ 26608], 95.00th=[ 60556], 00:16:19.810 | 99.00th=[107480], 99.50th=[114820], 99.90th=[120062], 99.95th=[122160], 00:16:19.810 | 99.99th=[125305] 00:16:19.810 bw ( KiB/s): min= 320, max=58888, per=89.24%, avg=20166.81, stdev=14219.79, samples=26 00:16:19.810 iops : min= 80, max=14722, avg=5041.69, stdev=3554.94, samples=26 00:16:19.810 lat (usec) : 500=0.01%, 750=1.28%, 1000=2.30% 00:16:19.810 lat (msec) : 2=8.39%, 4=7.45%, 10=16.98%, 20=8.55%, 50=49.06% 00:16:19.810 lat (msec) : 100=4.02%, 250=1.66%, 500=0.29% 00:16:19.810 cpu : usr=99.29%, sys=0.14%, ctx=73, majf=0, minf=5512 00:16:19.810 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:19.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.810 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:19.810 issued rwts: total=65327,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:19.810 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:19.810 00:16:19.810 Run status group 0 (all jobs): 00:16:19.810 READ: bw=20.0MiB/s (20.9MB/s), 10.00MiB/s-10.1MiB/s (10.5MB/s-10.6MB/s), io=510MiB (535MB), run=25303-25527msec 00:16:19.810 WRITE: bw=22.1MiB/s (23.1MB/s), 11.0MiB/s-12.1MiB/s (11.6MB/s-12.7MB/s), io=512MiB (537MB), run=21096-23200msec 00:16:19.810 ----------------------------------------------------- 00:16:19.811 Suppressions used: 00:16:19.811 count bytes template 00:16:19.811 2 10 /usr/src/fio/parse.c 00:16:19.811 2 192 /usr/src/fio/iolog.c 00:16:19.811 1 8 libtcmalloc_minimal.so 00:16:19.811 1 904 libcrypto.so 00:16:19.811 ----------------------------------------------------- 00:16:19.811 00:16:19.811 02:27:04 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:16:19.811 02:27:04 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:19.811 02:27:04 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:19.811 02:27:04 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:19.811 02:27:04 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:19.811 02:27:04 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:19.811 02:27:04 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:19.811 02:27:04 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:19.811 02:27:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:19.811 02:27:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:16:19.811 02:27:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:16:19.811 02:27:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:16:19.811 02:27:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:19.811 02:27:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:16:19.811 02:27:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:16:19.811 02:27:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:16:19.811 02:27:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:19.811 02:27:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:16:19.811 02:27:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:16:19.811 02:27:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:19.811 02:27:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:19.811 02:27:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:16:19.811 02:27:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:19.811 02:27:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:19.811 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:19.811 fio-3.35 00:16:19.811 Starting 1 thread 00:16:41.818 00:16:41.818 test: (groupid=0, jobs=1): err= 0: pid=72896: Mon Nov 4 02:27:26 2024 00:16:41.818 read: IOPS=5524, BW=21.6MiB/s (22.6MB/s)(255MiB/11802msec) 00:16:41.818 slat (usec): min=3, max=482, avg= 7.40, stdev= 3.62 00:16:41.818 clat (usec): min=1697, max=47455, avg=23156.83, stdev=3517.15 00:16:41.818 lat (usec): min=1712, max=47462, avg=23164.23, stdev=3517.71 00:16:41.818 clat percentiles (usec): 00:16:41.818 | 1.00th=[15008], 5.00th=[16712], 10.00th=[18220], 20.00th=[20841], 00:16:41.818 | 30.00th=[21890], 40.00th=[22676], 50.00th=[23462], 60.00th=[23987], 00:16:41.818 | 70.00th=[24773], 80.00th=[25297], 90.00th=[26608], 95.00th=[28443], 00:16:41.818 | 99.00th=[33424], 99.50th=[34866], 99.90th=[39060], 99.95th=[43254], 00:16:41.818 | 99.99th=[47449] 00:16:41.818 write: IOPS=7445, BW=29.1MiB/s (30.5MB/s)(256MiB/8802msec); 0 zone resets 00:16:41.818 slat (usec): min=4, max=2369, avg=10.37, stdev=17.73 00:16:41.818 clat (usec): min=830, max=95525, avg=17102.28, stdev=20969.53 00:16:41.818 lat (usec): min=859, max=95536, avg=17112.65, stdev=20969.50 00:16:41.818 clat percentiles (usec): 00:16:41.818 | 1.00th=[ 1549], 5.00th=[ 1860], 10.00th=[ 2040], 20.00th=[ 2409], 00:16:41.818 | 30.00th=[ 2868], 40.00th=[ 3884], 50.00th=[10945], 60.00th=[13566], 00:16:41.818 | 70.00th=[15533], 80.00th=[18220], 90.00th=[62129], 95.00th=[65274], 00:16:41.818 | 99.00th=[69731], 99.50th=[71828], 99.90th=[74974], 99.95th=[78119], 00:16:41.818 | 99.99th=[92799] 00:16:41.818 bw ( KiB/s): min=16008, max=41008, per=97.80%, avg=29127.11, stdev=6061.07, samples=18 00:16:41.818 iops : min= 4002, max=10252, avg=7281.78, stdev=1515.27, samples=18 00:16:41.818 lat (usec) : 1000=0.01% 00:16:41.818 lat (msec) : 2=4.32%, 4=15.85%, 10=3.23%, 20=26.08%, 50=42.71% 00:16:41.818 lat (msec) : 100=7.81% 00:16:41.818 cpu : 
usr=97.72%, sys=0.45%, ctx=82, majf=0, minf=5566 00:16:41.818 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:41.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.818 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:41.818 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.818 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:41.818 00:16:41.818 Run status group 0 (all jobs): 00:16:41.818 READ: bw=21.6MiB/s (22.6MB/s), 21.6MiB/s-21.6MiB/s (22.6MB/s-22.6MB/s), io=255MiB (267MB), run=11802-11802msec 00:16:41.818 WRITE: bw=29.1MiB/s (30.5MB/s), 29.1MiB/s-29.1MiB/s (30.5MB/s-30.5MB/s), io=256MiB (268MB), run=8802-8802msec 00:16:41.818 ----------------------------------------------------- 00:16:41.818 Suppressions used: 00:16:41.818 count bytes template 00:16:41.818 1 5 /usr/src/fio/parse.c 00:16:41.818 2 192 /usr/src/fio/iolog.c 00:16:41.818 1 8 libtcmalloc_minimal.so 00:16:41.818 1 904 libcrypto.so 00:16:41.818 ----------------------------------------------------- 00:16:41.818 00:16:41.818 02:27:27 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:41.818 02:27:27 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:41.818 02:27:27 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:41.818 02:27:27 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:41.818 Remove shared memory files 00:16:41.818 02:27:27 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:41.818 02:27:27 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:41.818 02:27:27 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:41.818 02:27:27 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:41.818 02:27:27 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57117 /dev/shm/spdk_tgt_trace.pid71185 00:16:41.818 02:27:27 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:41.818 02:27:27 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:41.818 ************************************ 00:16:41.818 END TEST ftl_fio_basic 00:16:41.818 ************************************ 00:16:41.818 00:16:41.818 real 1m14.161s 00:16:41.818 user 2m36.280s 00:16:41.818 sys 0m3.335s 00:16:41.818 02:27:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:41.818 02:27:27 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:41.818 02:27:27 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:41.818 02:27:27 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:16:41.818 02:27:27 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:41.818 02:27:27 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:41.818 ************************************ 00:16:41.818 START TEST ftl_bdevperf 00:16:41.818 ************************************ 00:16:41.818 02:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:41.818 * Looking for test storage... 
00:16:41.818 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:41.818 02:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:41.818 02:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:16:41.818 02:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:41.818 02:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:41.818 02:27:27 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:41.818 02:27:27 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:41.818 02:27:27 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:41.818 02:27:27 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:16:41.818 02:27:27 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:16:41.818 02:27:27 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:16:41.818 02:27:27 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:16:41.818 02:27:27 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:16:41.818 02:27:27 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:16:41.818 02:27:27 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:16:41.818 02:27:27 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:41.818 02:27:27 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:16:41.818 02:27:27 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:16:41.818 02:27:27 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:41.818 02:27:27 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:41.818 02:27:27 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:41.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.819 --rc genhtml_branch_coverage=1 00:16:41.819 --rc genhtml_function_coverage=1 00:16:41.819 --rc genhtml_legend=1 00:16:41.819 --rc geninfo_all_blocks=1 00:16:41.819 --rc geninfo_unexecuted_blocks=1 00:16:41.819 00:16:41.819 ' 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:41.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.819 --rc genhtml_branch_coverage=1 00:16:41.819 
--rc genhtml_function_coverage=1 00:16:41.819 --rc genhtml_legend=1 00:16:41.819 --rc geninfo_all_blocks=1 00:16:41.819 --rc geninfo_unexecuted_blocks=1 00:16:41.819 00:16:41.819 ' 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:41.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.819 --rc genhtml_branch_coverage=1 00:16:41.819 --rc genhtml_function_coverage=1 00:16:41.819 --rc genhtml_legend=1 00:16:41.819 --rc geninfo_all_blocks=1 00:16:41.819 --rc geninfo_unexecuted_blocks=1 00:16:41.819 00:16:41.819 ' 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:41.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.819 --rc genhtml_branch_coverage=1 00:16:41.819 --rc genhtml_function_coverage=1 00:16:41.819 --rc genhtml_legend=1 00:16:41.819 --rc geninfo_all_blocks=1 00:16:41.819 --rc geninfo_unexecuted_blocks=1 00:16:41.819 00:16:41.819 ' 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=73206 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 73206 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 73206 ']' 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:41.819 02:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:41.819 [2024-11-04 02:27:27.902239] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
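bdevperf here is the standard SPDK example app started in wait mode: -z keeps it idle until it is configured and driven over RPC, and -T ftl0 restricts the run to the FTL bdev. A sketch of the launch-and-wait pattern the harness uses (killprocess and waitforlisten are autotest_common.sh helpers; the backgrounding is implied by the pid capture in the log):

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    $BDEVPERF -z -T ftl0 &
    bdevperf_pid=$!
    # Tear the app down on any exit path, as bdevperf.sh@20 does.
    trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten $bdevperf_pid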
00:16:41.819 [2024-11-04 02:27:27.902571] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73206 ] 00:16:41.819 [2024-11-04 02:27:28.068250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.819 [2024-11-04 02:27:28.164132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.819 02:27:28 ftl.ftl_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:41.819 02:27:28 ftl.ftl_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:16:41.819 02:27:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:41.819 02:27:28 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:41.819 02:27:28 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:41.819 02:27:28 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:41.819 02:27:28 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:41.819 02:27:28 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:42.080 02:27:29 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:42.080 02:27:29 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:42.080 02:27:29 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:42.080 02:27:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:16:42.080 02:27:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:42.080 02:27:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:16:42.080 02:27:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:16:42.081 02:27:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:42.342 02:27:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:42.342 { 00:16:42.342 "name": "nvme0n1", 00:16:42.342 "aliases": [ 00:16:42.342 "186ad7ec-5bee-44af-81f9-dfaf77ea2482" 00:16:42.342 ], 00:16:42.342 "product_name": "NVMe disk", 00:16:42.342 "block_size": 4096, 00:16:42.342 "num_blocks": 1310720, 00:16:42.342 "uuid": "186ad7ec-5bee-44af-81f9-dfaf77ea2482", 00:16:42.342 "numa_id": -1, 00:16:42.342 "assigned_rate_limits": { 00:16:42.342 "rw_ios_per_sec": 0, 00:16:42.342 "rw_mbytes_per_sec": 0, 00:16:42.342 "r_mbytes_per_sec": 0, 00:16:42.342 "w_mbytes_per_sec": 0 00:16:42.342 }, 00:16:42.342 "claimed": true, 00:16:42.342 "claim_type": "read_many_write_one", 00:16:42.342 "zoned": false, 00:16:42.342 "supported_io_types": { 00:16:42.342 "read": true, 00:16:42.342 "write": true, 00:16:42.342 "unmap": true, 00:16:42.342 "flush": true, 00:16:42.342 "reset": true, 00:16:42.342 "nvme_admin": true, 00:16:42.342 "nvme_io": true, 00:16:42.342 "nvme_io_md": false, 00:16:42.342 "write_zeroes": true, 00:16:42.342 "zcopy": false, 00:16:42.342 "get_zone_info": false, 00:16:42.342 "zone_management": false, 00:16:42.342 "zone_append": false, 00:16:42.342 "compare": true, 00:16:42.342 "compare_and_write": false, 00:16:42.342 "abort": true, 00:16:42.342 "seek_hole": false, 00:16:42.342 "seek_data": false, 00:16:42.342 "copy": true, 00:16:42.342 "nvme_iov_md": false 00:16:42.342 }, 00:16:42.342 "driver_specific": { 00:16:42.342 
"nvme": [ 00:16:42.342 { 00:16:42.342 "pci_address": "0000:00:11.0", 00:16:42.342 "trid": { 00:16:42.342 "trtype": "PCIe", 00:16:42.342 "traddr": "0000:00:11.0" 00:16:42.342 }, 00:16:42.342 "ctrlr_data": { 00:16:42.342 "cntlid": 0, 00:16:42.342 "vendor_id": "0x1b36", 00:16:42.342 "model_number": "QEMU NVMe Ctrl", 00:16:42.342 "serial_number": "12341", 00:16:42.342 "firmware_revision": "8.0.0", 00:16:42.342 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:42.342 "oacs": { 00:16:42.342 "security": 0, 00:16:42.343 "format": 1, 00:16:42.343 "firmware": 0, 00:16:42.343 "ns_manage": 1 00:16:42.343 }, 00:16:42.343 "multi_ctrlr": false, 00:16:42.343 "ana_reporting": false 00:16:42.343 }, 00:16:42.343 "vs": { 00:16:42.343 "nvme_version": "1.4" 00:16:42.343 }, 00:16:42.343 "ns_data": { 00:16:42.343 "id": 1, 00:16:42.343 "can_share": false 00:16:42.343 } 00:16:42.343 } 00:16:42.343 ], 00:16:42.343 "mp_policy": "active_passive" 00:16:42.343 } 00:16:42.343 } 00:16:42.343 ]' 00:16:42.343 02:27:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:42.343 02:27:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:16:42.343 02:27:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:42.343 02:27:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=1310720 00:16:42.343 02:27:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:16:42.343 02:27:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 5120 00:16:42.343 02:27:29 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:42.343 02:27:29 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:42.343 02:27:29 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:42.343 02:27:29 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:42.343 02:27:29 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:42.604 02:27:29 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=e59ed071-52e4-403d-86e6-f83bf193c902 00:16:42.604 02:27:29 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:42.604 02:27:29 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e59ed071-52e4-403d-86e6-f83bf193c902 00:16:42.865 02:27:29 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:43.127 02:27:30 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=c64443ee-5358-454c-a119-5df22f07c92a 00:16:43.127 02:27:30 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c64443ee-5358-454c-a119-5df22f07c92a 00:16:43.388 02:27:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=c5fee986-b678-4e58-be6b-fb7cf10d18a3 00:16:43.388 02:27:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c5fee986-b678-4e58-be6b-fb7cf10d18a3 00:16:43.388 02:27:30 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:43.388 02:27:30 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:43.388 02:27:30 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=c5fee986-b678-4e58-be6b-fb7cf10d18a3 00:16:43.388 02:27:30 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:43.388 02:27:30 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size c5fee986-b678-4e58-be6b-fb7cf10d18a3 00:16:43.388 02:27:30 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=c5fee986-b678-4e58-be6b-fb7cf10d18a3 00:16:43.388 02:27:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:43.388 02:27:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:16:43.388 02:27:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:16:43.388 02:27:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c5fee986-b678-4e58-be6b-fb7cf10d18a3 00:16:43.650 02:27:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:43.650 { 00:16:43.650 "name": "c5fee986-b678-4e58-be6b-fb7cf10d18a3", 00:16:43.650 "aliases": [ 00:16:43.650 "lvs/nvme0n1p0" 00:16:43.650 ], 00:16:43.650 "product_name": "Logical Volume", 00:16:43.650 "block_size": 4096, 00:16:43.650 "num_blocks": 26476544, 00:16:43.650 "uuid": "c5fee986-b678-4e58-be6b-fb7cf10d18a3", 00:16:43.650 "assigned_rate_limits": { 00:16:43.650 "rw_ios_per_sec": 0, 00:16:43.650 "rw_mbytes_per_sec": 0, 00:16:43.650 "r_mbytes_per_sec": 0, 00:16:43.650 "w_mbytes_per_sec": 0 00:16:43.650 }, 00:16:43.650 "claimed": false, 00:16:43.650 "zoned": false, 00:16:43.650 "supported_io_types": { 00:16:43.650 "read": true, 00:16:43.650 "write": true, 00:16:43.650 "unmap": true, 00:16:43.650 "flush": false, 00:16:43.650 "reset": true, 00:16:43.650 "nvme_admin": false, 00:16:43.650 "nvme_io": false, 00:16:43.650 "nvme_io_md": false, 00:16:43.650 "write_zeroes": true, 00:16:43.650 "zcopy": false, 00:16:43.650 "get_zone_info": false, 00:16:43.650 "zone_management": false, 00:16:43.650 "zone_append": false, 00:16:43.650 "compare": false, 00:16:43.650 "compare_and_write": false, 00:16:43.650 "abort": false, 00:16:43.650 "seek_hole": true, 00:16:43.650 "seek_data": true, 00:16:43.650 "copy": false, 00:16:43.650 "nvme_iov_md": false 00:16:43.650 }, 00:16:43.650 "driver_specific": { 00:16:43.650 "lvol": { 00:16:43.650 "lvol_store_uuid": "c64443ee-5358-454c-a119-5df22f07c92a", 00:16:43.650 "base_bdev": "nvme0n1", 00:16:43.650 "thin_provision": true, 00:16:43.650 "num_allocated_clusters": 0, 00:16:43.650 "snapshot": false, 00:16:43.650 "clone": false, 00:16:43.650 "esnap_clone": false 00:16:43.650 } 00:16:43.650 } 00:16:43.650 } 00:16:43.650 ]' 00:16:43.650 02:27:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:43.650 02:27:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:16:43.650 02:27:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:43.650 02:27:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:16:43.650 02:27:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:16:43.650 02:27:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:16:43.650 02:27:30 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:43.650 02:27:30 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:43.650 02:27:30 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:43.912 02:27:30 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:43.912 02:27:30 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:43.912 02:27:30 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size c5fee986-b678-4e58-be6b-fb7cf10d18a3 00:16:43.912 02:27:30 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bdev_name=c5fee986-b678-4e58-be6b-fb7cf10d18a3 00:16:43.912 02:27:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:43.912 02:27:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:16:43.912 02:27:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:16:43.912 02:27:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c5fee986-b678-4e58-be6b-fb7cf10d18a3 00:16:44.173 02:27:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:44.173 { 00:16:44.173 "name": "c5fee986-b678-4e58-be6b-fb7cf10d18a3", 00:16:44.174 "aliases": [ 00:16:44.174 "lvs/nvme0n1p0" 00:16:44.174 ], 00:16:44.174 "product_name": "Logical Volume", 00:16:44.174 "block_size": 4096, 00:16:44.174 "num_blocks": 26476544, 00:16:44.174 "uuid": "c5fee986-b678-4e58-be6b-fb7cf10d18a3", 00:16:44.174 "assigned_rate_limits": { 00:16:44.174 "rw_ios_per_sec": 0, 00:16:44.174 "rw_mbytes_per_sec": 0, 00:16:44.174 "r_mbytes_per_sec": 0, 00:16:44.174 "w_mbytes_per_sec": 0 00:16:44.174 }, 00:16:44.174 "claimed": false, 00:16:44.174 "zoned": false, 00:16:44.174 "supported_io_types": { 00:16:44.174 "read": true, 00:16:44.174 "write": true, 00:16:44.174 "unmap": true, 00:16:44.174 "flush": false, 00:16:44.174 "reset": true, 00:16:44.174 "nvme_admin": false, 00:16:44.174 "nvme_io": false, 00:16:44.174 "nvme_io_md": false, 00:16:44.174 "write_zeroes": true, 00:16:44.174 "zcopy": false, 00:16:44.174 "get_zone_info": false, 00:16:44.174 "zone_management": false, 00:16:44.174 "zone_append": false, 00:16:44.174 "compare": false, 00:16:44.174 "compare_and_write": false, 00:16:44.174 "abort": false, 00:16:44.174 "seek_hole": true, 00:16:44.174 "seek_data": true, 00:16:44.174 "copy": false, 00:16:44.174 "nvme_iov_md": false 00:16:44.174 }, 00:16:44.174 "driver_specific": { 00:16:44.174 "lvol": { 00:16:44.174 "lvol_store_uuid": "c64443ee-5358-454c-a119-5df22f07c92a", 00:16:44.174 "base_bdev": "nvme0n1", 00:16:44.174 "thin_provision": true, 00:16:44.174 "num_allocated_clusters": 0, 00:16:44.174 "snapshot": false, 00:16:44.174 "clone": false, 00:16:44.174 "esnap_clone": false 00:16:44.174 } 00:16:44.174 } 00:16:44.174 } 00:16:44.174 ]' 00:16:44.174 02:27:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:44.174 02:27:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:16:44.174 02:27:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:44.174 02:27:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:16:44.174 02:27:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:16:44.174 02:27:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:16:44.174 02:27:31 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:44.174 02:27:31 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:44.435 02:27:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:16:44.436 02:27:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size c5fee986-b678-4e58-be6b-fb7cf10d18a3 00:16:44.436 02:27:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=c5fee986-b678-4e58-be6b-fb7cf10d18a3 00:16:44.436 02:27:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:44.436 02:27:31 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bs 00:16:44.436 02:27:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:16:44.436 02:27:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c5fee986-b678-4e58-be6b-fb7cf10d18a3 00:16:44.697 02:27:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:44.697 { 00:16:44.697 "name": "c5fee986-b678-4e58-be6b-fb7cf10d18a3", 00:16:44.697 "aliases": [ 00:16:44.697 "lvs/nvme0n1p0" 00:16:44.697 ], 00:16:44.697 "product_name": "Logical Volume", 00:16:44.697 "block_size": 4096, 00:16:44.697 "num_blocks": 26476544, 00:16:44.697 "uuid": "c5fee986-b678-4e58-be6b-fb7cf10d18a3", 00:16:44.697 "assigned_rate_limits": { 00:16:44.697 "rw_ios_per_sec": 0, 00:16:44.697 "rw_mbytes_per_sec": 0, 00:16:44.697 "r_mbytes_per_sec": 0, 00:16:44.697 "w_mbytes_per_sec": 0 00:16:44.697 }, 00:16:44.697 "claimed": false, 00:16:44.697 "zoned": false, 00:16:44.697 "supported_io_types": { 00:16:44.697 "read": true, 00:16:44.697 "write": true, 00:16:44.697 "unmap": true, 00:16:44.697 "flush": false, 00:16:44.697 "reset": true, 00:16:44.697 "nvme_admin": false, 00:16:44.697 "nvme_io": false, 00:16:44.697 "nvme_io_md": false, 00:16:44.697 "write_zeroes": true, 00:16:44.697 "zcopy": false, 00:16:44.697 "get_zone_info": false, 00:16:44.697 "zone_management": false, 00:16:44.697 "zone_append": false, 00:16:44.697 "compare": false, 00:16:44.697 "compare_and_write": false, 00:16:44.697 "abort": false, 00:16:44.697 "seek_hole": true, 00:16:44.697 "seek_data": true, 00:16:44.697 "copy": false, 00:16:44.697 "nvme_iov_md": false 00:16:44.697 }, 00:16:44.697 "driver_specific": { 00:16:44.697 "lvol": { 00:16:44.697 "lvol_store_uuid": "c64443ee-5358-454c-a119-5df22f07c92a", 00:16:44.697 "base_bdev": "nvme0n1", 00:16:44.697 "thin_provision": true, 00:16:44.697 "num_allocated_clusters": 0, 00:16:44.697 "snapshot": false, 00:16:44.697 "clone": false, 00:16:44.697 "esnap_clone": false 00:16:44.697 } 00:16:44.697 } 00:16:44.697 } 00:16:44.697 ]' 00:16:44.697 02:27:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:44.697 02:27:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:16:44.697 02:27:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:44.697 02:27:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:16:44.697 02:27:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:16:44.697 02:27:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:16:44.697 02:27:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:16:44.697 02:27:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c5fee986-b678-4e58-be6b-fb7cf10d18a3 -c nvc0n1p0 --l2p_dram_limit 20 00:16:44.960 [2024-11-04 02:27:31.861513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.960 [2024-11-04 02:27:31.861581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:44.960 [2024-11-04 02:27:31.861597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:44.960 [2024-11-04 02:27:31.861609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.960 [2024-11-04 02:27:31.861683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.960 [2024-11-04 02:27:31.861696] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:44.960 [2024-11-04 02:27:31.861706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:16:44.960 [2024-11-04 02:27:31.861719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.960 [2024-11-04 02:27:31.861739] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:44.960 [2024-11-04 02:27:31.862626] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:44.960 [2024-11-04 02:27:31.862662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.960 [2024-11-04 02:27:31.862675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:44.960 [2024-11-04 02:27:31.862685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.930 ms 00:16:44.960 [2024-11-04 02:27:31.862696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.960 [2024-11-04 02:27:31.862785] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 87c0b6e4-dcd7-479e-846a-febf53a2588d 00:16:44.960 [2024-11-04 02:27:31.864798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.960 [2024-11-04 02:27:31.864849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:44.960 [2024-11-04 02:27:31.864878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:16:44.960 [2024-11-04 02:27:31.864890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.960 [2024-11-04 02:27:31.873788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.960 [2024-11-04 02:27:31.873834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:44.960 [2024-11-04 02:27:31.873849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.844 ms 00:16:44.960 [2024-11-04 02:27:31.873857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.960 [2024-11-04 02:27:31.873980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.960 [2024-11-04 02:27:31.873991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:44.960 [2024-11-04 02:27:31.874010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:16:44.960 [2024-11-04 02:27:31.874018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.960 [2024-11-04 02:27:31.874110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.960 [2024-11-04 02:27:31.874120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:44.960 [2024-11-04 02:27:31.874131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:44.960 [2024-11-04 02:27:31.874138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.960 [2024-11-04 02:27:31.874164] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:44.960 [2024-11-04 02:27:31.878615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.960 [2024-11-04 02:27:31.878661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:44.960 [2024-11-04 02:27:31.878671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.461 ms 00:16:44.960 [2024-11-04 02:27:31.878683] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.960 [2024-11-04 02:27:31.878721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.960 [2024-11-04 02:27:31.878735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:44.960 [2024-11-04 02:27:31.878744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:44.960 [2024-11-04 02:27:31.878754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.960 [2024-11-04 02:27:31.878789] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:44.960 [2024-11-04 02:27:31.878958] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:44.960 [2024-11-04 02:27:31.878975] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:44.960 [2024-11-04 02:27:31.878989] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:44.960 [2024-11-04 02:27:31.879001] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:44.960 [2024-11-04 02:27:31.879012] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:44.960 [2024-11-04 02:27:31.879021] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:44.960 [2024-11-04 02:27:31.879031] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:44.960 [2024-11-04 02:27:31.879039] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:44.960 [2024-11-04 02:27:31.879050] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:44.960 [2024-11-04 02:27:31.879060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.960 [2024-11-04 02:27:31.879072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:44.960 [2024-11-04 02:27:31.879080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:16:44.960 [2024-11-04 02:27:31.879093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.960 [2024-11-04 02:27:31.879175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.960 [2024-11-04 02:27:31.879189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:44.960 [2024-11-04 02:27:31.879197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:44.960 [2024-11-04 02:27:31.879209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.960 [2024-11-04 02:27:31.879300] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:44.960 [2024-11-04 02:27:31.879313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:44.960 [2024-11-04 02:27:31.879322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:44.960 [2024-11-04 02:27:31.879333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.960 [2024-11-04 02:27:31.879345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:44.960 [2024-11-04 02:27:31.879355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:44.960 [2024-11-04 02:27:31.879362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:44.960 
[2024-11-04 02:27:31.879374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:44.960 [2024-11-04 02:27:31.879381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:44.960 [2024-11-04 02:27:31.879392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:44.960 [2024-11-04 02:27:31.879399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:44.960 [2024-11-04 02:27:31.879408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:44.960 [2024-11-04 02:27:31.879416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:44.960 [2024-11-04 02:27:31.879434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:44.960 [2024-11-04 02:27:31.879441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:44.960 [2024-11-04 02:27:31.879451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.960 [2024-11-04 02:27:31.879458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:44.960 [2024-11-04 02:27:31.879467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:44.960 [2024-11-04 02:27:31.879474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.960 [2024-11-04 02:27:31.879485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:44.960 [2024-11-04 02:27:31.879491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:44.960 [2024-11-04 02:27:31.879504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:44.960 [2024-11-04 02:27:31.879512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:44.960 [2024-11-04 02:27:31.879521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:44.960 [2024-11-04 02:27:31.879529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:44.960 [2024-11-04 02:27:31.879540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:44.960 [2024-11-04 02:27:31.879547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:44.960 [2024-11-04 02:27:31.879557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:44.960 [2024-11-04 02:27:31.879565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:44.960 [2024-11-04 02:27:31.879574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:44.960 [2024-11-04 02:27:31.879582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:44.960 [2024-11-04 02:27:31.879594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:44.960 [2024-11-04 02:27:31.879602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:44.960 [2024-11-04 02:27:31.879611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:44.960 [2024-11-04 02:27:31.879619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:44.960 [2024-11-04 02:27:31.879628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:44.960 [2024-11-04 02:27:31.879637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:44.960 [2024-11-04 02:27:31.879646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:44.961 [2024-11-04 02:27:31.879654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:16:44.961 [2024-11-04 02:27:31.879666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.961 [2024-11-04 02:27:31.879673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:44.961 [2024-11-04 02:27:31.879683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:44.961 [2024-11-04 02:27:31.879690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.961 [2024-11-04 02:27:31.879698] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:44.961 [2024-11-04 02:27:31.879707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:44.961 [2024-11-04 02:27:31.879735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:44.961 [2024-11-04 02:27:31.879743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.961 [2024-11-04 02:27:31.879757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:44.961 [2024-11-04 02:27:31.879764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:44.961 [2024-11-04 02:27:31.879773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:44.961 [2024-11-04 02:27:31.879781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:44.961 [2024-11-04 02:27:31.879791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:44.961 [2024-11-04 02:27:31.879797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:44.961 [2024-11-04 02:27:31.879812] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:44.961 [2024-11-04 02:27:31.879823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:44.961 [2024-11-04 02:27:31.879834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:44.961 [2024-11-04 02:27:31.879842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:44.961 [2024-11-04 02:27:31.879852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:44.961 [2024-11-04 02:27:31.879859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:44.961 [2024-11-04 02:27:31.879888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:44.961 [2024-11-04 02:27:31.879895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:44.961 [2024-11-04 02:27:31.879906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:44.961 [2024-11-04 02:27:31.879913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:44.961 [2024-11-04 02:27:31.879926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:44.961 [2024-11-04 02:27:31.879933] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:44.961 [2024-11-04 02:27:31.879942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:44.961 [2024-11-04 02:27:31.879950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:44.961 [2024-11-04 02:27:31.879959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:44.961 [2024-11-04 02:27:31.879967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:44.961 [2024-11-04 02:27:31.879976] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:44.961 [2024-11-04 02:27:31.879985] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:44.961 [2024-11-04 02:27:31.879998] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:44.961 [2024-11-04 02:27:31.880007] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:44.961 [2024-11-04 02:27:31.880018] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:44.961 [2024-11-04 02:27:31.880025] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:44.961 [2024-11-04 02:27:31.880035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.961 [2024-11-04 02:27:31.880045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:44.961 [2024-11-04 02:27:31.880058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.799 ms 00:16:44.961 [2024-11-04 02:27:31.880066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.961 [2024-11-04 02:27:31.880104] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
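All of the layout trace above is the startup of a single RPC, bdev_ftl_create. Condensed, the device stack this run assembles on top of the two QEMU NVMe controllers looks like the following; the commands and UUIDs are copied from the xtrace, and only the grouping into one listing is editorial:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Base (data) and cache controllers, both QEMU-emulated NVMe.
    $rpc_py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    $rpc_py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0

    # 103424 MiB thin-provisioned lvol on the base namespace: the FTL data bdev.
    $rpc_py bdev_lvol_create_lvstore nvme0n1 lvs
    $rpc_py bdev_lvol_create nvme0n1p0 103424 -t -u c64443ee-5358-454c-a119-5df22f07c92a

    # 5171 MiB split of the cache namespace: the FTL non-volatile write cache.
    $rpc_py bdev_split_create nvc0n1 -s 5171 1

    # Assemble ftl0; the 240 s RPC timeout covers the NV cache scrub, which the
    # trace below times at 3757.615 ms of the 4225.413 ms total startup.
    $rpc_py -t 240 bdev_ftl_create -b ftl0 -d c5fee986-b678-4e58-be6b-fb7cf10d18a3 \
        -c nvc0n1p0 --l2p_dram_limit 20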
00:16:44.961 [2024-11-04 02:27:31.880114] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:49.171 [2024-11-04 02:27:35.637736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.171 [2024-11-04 02:27:35.637821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:49.171 [2024-11-04 02:27:35.637843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3757.615 ms 00:16:49.171 [2024-11-04 02:27:35.637857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.171 [2024-11-04 02:27:35.670385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.171 [2024-11-04 02:27:35.670447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:49.171 [2024-11-04 02:27:35.670468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.245 ms 00:16:49.171 [2024-11-04 02:27:35.670477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.171 [2024-11-04 02:27:35.670622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.171 [2024-11-04 02:27:35.670635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:49.171 [2024-11-04 02:27:35.670650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:16:49.171 [2024-11-04 02:27:35.670658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.171 [2024-11-04 02:27:35.724786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.171 [2024-11-04 02:27:35.724848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:49.171 [2024-11-04 02:27:35.724885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.089 ms 00:16:49.171 [2024-11-04 02:27:35.724895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.171 [2024-11-04 02:27:35.724940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.171 [2024-11-04 02:27:35.724949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:49.171 [2024-11-04 02:27:35.724962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:49.171 [2024-11-04 02:27:35.724974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.171 [2024-11-04 02:27:35.725590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.171 [2024-11-04 02:27:35.725643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:49.171 [2024-11-04 02:27:35.725656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:16:49.171 [2024-11-04 02:27:35.725665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.171 [2024-11-04 02:27:35.725792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.171 [2024-11-04 02:27:35.725803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:49.171 [2024-11-04 02:27:35.725817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:16:49.171 [2024-11-04 02:27:35.725828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.171 [2024-11-04 02:27:35.742015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.171 [2024-11-04 02:27:35.742060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:49.171 [2024-11-04 
02:27:35.742074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.122 ms 00:16:49.171 [2024-11-04 02:27:35.742083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.171 [2024-11-04 02:27:35.755632] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:49.171 [2024-11-04 02:27:35.763616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.171 [2024-11-04 02:27:35.763901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:49.171 [2024-11-04 02:27:35.763923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.445 ms 00:16:49.171 [2024-11-04 02:27:35.763934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.171 [2024-11-04 02:27:35.865810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.171 [2024-11-04 02:27:35.865904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:49.171 [2024-11-04 02:27:35.865923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.841 ms 00:16:49.171 [2024-11-04 02:27:35.865935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.171 [2024-11-04 02:27:35.866147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.171 [2024-11-04 02:27:35.866168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:49.171 [2024-11-04 02:27:35.866178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:16:49.171 [2024-11-04 02:27:35.866189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.171 [2024-11-04 02:27:35.892419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.171 [2024-11-04 02:27:35.892479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:49.171 [2024-11-04 02:27:35.892493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.176 ms 00:16:49.171 [2024-11-04 02:27:35.892505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.171 [2024-11-04 02:27:35.917592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.171 [2024-11-04 02:27:35.917645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:49.171 [2024-11-04 02:27:35.917659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.037 ms 00:16:49.171 [2024-11-04 02:27:35.917670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.171 [2024-11-04 02:27:35.918316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.171 [2024-11-04 02:27:35.918343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:49.171 [2024-11-04 02:27:35.918354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.599 ms 00:16:49.171 [2024-11-04 02:27:35.918364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.171 [2024-11-04 02:27:36.003543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.171 [2024-11-04 02:27:36.003799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:49.171 [2024-11-04 02:27:36.003824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.137 ms 00:16:49.171 [2024-11-04 02:27:36.003836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.171 [2024-11-04 
02:27:36.032000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.171 [2024-11-04 02:27:36.032057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:49.171 [2024-11-04 02:27:36.032071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.051 ms 00:16:49.171 [2024-11-04 02:27:36.032082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.171 [2024-11-04 02:27:36.058471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.171 [2024-11-04 02:27:36.058687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:49.171 [2024-11-04 02:27:36.058709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.336 ms 00:16:49.171 [2024-11-04 02:27:36.058719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.171 [2024-11-04 02:27:36.085613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.171 [2024-11-04 02:27:36.085813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:49.171 [2024-11-04 02:27:36.085835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.812 ms 00:16:49.171 [2024-11-04 02:27:36.085845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.171 [2024-11-04 02:27:36.085910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.171 [2024-11-04 02:27:36.085929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:49.171 [2024-11-04 02:27:36.085939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:49.171 [2024-11-04 02:27:36.085949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.171 [2024-11-04 02:27:36.086055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.171 [2024-11-04 02:27:36.086071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:49.171 [2024-11-04 02:27:36.086080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:16:49.171 [2024-11-04 02:27:36.086092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.171 [2024-11-04 02:27:36.087408] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4225.413 ms, result 0 00:16:49.171 { 00:16:49.171 "name": "ftl0", 00:16:49.171 "uuid": "87c0b6e4-dcd7-479e-846a-febf53a2588d" 00:16:49.171 } 00:16:49.171 02:27:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:49.171 02:27:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:16:49.171 02:27:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:16:49.431 02:27:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:49.431 [2024-11-04 02:27:36.431385] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:49.431 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:49.431 Zero copy mechanism will not be used. 00:16:49.431 Running I/O for 4 seconds... 
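One detail worth flagging before the numbers: the 69632-byte I/O size is 17 blocks of 4096 bytes (64 KiB plus one extra block), which lands just above bdevperf's 65536-byte zero-copy threshold, hence the notice that zero copy is disabled for this run. The MiB/s column in the results is derived as IOPS * io_size / 2^20; a quick cross-check against the totals reported below, using awk purely as a calculator:

    # MiB/s = IOPS * io_size / 2^20, with the totals from the JSON results below.
    awk 'BEGIN { printf "%.2f MiB/s\n", 1077.800457796681 * 69632 / 1048576 }'
    # -> 71.57 MiB/s, matching the reported "mibps" of 71.57268665056085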
00:16:51.760 1007.00 IOPS, 66.87 MiB/s [2024-11-04T02:27:39.445Z] 1039.00 IOPS, 69.00 MiB/s [2024-11-04T02:27:40.828Z] 1002.33 IOPS, 66.56 MiB/s [2024-11-04T02:27:40.828Z] 1078.50 IOPS, 71.62 MiB/s 00:16:53.717 Latency(us) 00:16:53.717 [2024-11-04T02:27:40.828Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.717 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:16:53.717 ftl0 : 4.00 1077.80 71.57 0.00 0.00 983.55 225.28 3453.24 00:16:53.717 [2024-11-04T02:27:40.828Z] =================================================================================================================== 00:16:53.717 [2024-11-04T02:27:40.828Z] Total : 1077.80 71.57 0.00 0.00 983.55 225.28 3453.24 00:16:53.717 [2024-11-04 02:27:40.443911] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:53.717 { 00:16:53.717 "results": [ 00:16:53.717 { 00:16:53.717 "job": "ftl0", 00:16:53.717 "core_mask": "0x1", 00:16:53.717 "workload": "randwrite", 00:16:53.717 "status": "finished", 00:16:53.717 "queue_depth": 1, 00:16:53.717 "io_size": 69632, 00:16:53.717 "runtime": 4.003524, 00:16:53.717 "iops": 1077.800457796681, 00:16:53.717 "mibps": 71.57268665056085, 00:16:53.717 "io_failed": 0, 00:16:53.717 "io_timeout": 0, 00:16:53.717 "avg_latency_us": 983.5546013013636, 00:16:53.717 "min_latency_us": 225.28, 00:16:53.717 "max_latency_us": 3453.243076923077 00:16:53.717 } 00:16:53.717 ], 00:16:53.717 "core_count": 1 00:16:53.717 } 00:16:53.717 02:27:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:16:53.717 [2024-11-04 02:27:40.560364] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:53.717 Running I/O for 4 seconds... 
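At queue depth 128 the averages in the next table can be cross-checked with Little's law, IOPS ~= queue_depth / mean latency. This is a sanity check on the log, not part of the test itself:

    # Little's law: 128 outstanding I/Os over a 21226.40 us mean latency.
    awk 'BEGIN { printf "%.0f IOPS\n", 128 / (21226.40 / 1e6) }'
    # -> 6030 IOPS, within ~0.4% of the 6009.00 reported; the small gap is
    # expected since the queue is not full for the entire 4.028 s runtime.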
00:16:55.612 8029.00 IOPS, 31.36 MiB/s [2024-11-04T02:27:43.710Z] 6776.50 IOPS, 26.47 MiB/s [2024-11-04T02:27:44.652Z] 6237.67 IOPS, 24.37 MiB/s [2024-11-04T02:27:44.652Z] 6019.00 IOPS, 23.51 MiB/s 00:16:57.541 Latency(us) 00:16:57.541 [2024-11-04T02:27:44.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.541 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:57.541 ftl0 : 4.03 6009.00 23.47 0.00 0.00 21226.40 233.16 46379.32 00:16:57.541 [2024-11-04T02:27:44.652Z] =================================================================================================================== 00:16:57.541 [2024-11-04T02:27:44.652Z] Total : 6009.00 23.47 0.00 0.00 21226.40 0.00 46379.32 00:16:57.541 [2024-11-04 02:27:44.604795] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:57.541 { 00:16:57.541 "results": [ 00:16:57.541 { 00:16:57.541 "job": "ftl0", 00:16:57.541 "core_mask": "0x1", 00:16:57.541 "workload": "randwrite", 00:16:57.541 "status": "finished", 00:16:57.541 "queue_depth": 128, 00:16:57.541 "io_size": 4096, 00:16:57.541 "runtime": 4.027959, 00:16:57.541 "iops": 6008.99860202152, 00:16:57.541 "mibps": 23.472650789146563, 00:16:57.541 "io_failed": 0, 00:16:57.541 "io_timeout": 0, 00:16:57.541 "avg_latency_us": 21226.39638966223, 00:16:57.541 "min_latency_us": 233.15692307692308, 00:16:57.541 "max_latency_us": 46379.32307692308 00:16:57.541 } 00:16:57.541 ], 00:16:57.541 "core_count": 1 00:16:57.541 } 00:16:57.541 02:27:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:57.802 [2024-11-04 02:27:44.711255] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:57.802 Running I/O for 4 seconds... 
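All three workloads reuse the same suspended bdevperf process; each perform_tests call drives it over its RPC socket and reruns it with new parameters. The pattern, condensed from the ftl/bdevperf.sh@30-32 invocations above (flags verbatim; the loop is an illustrative reshaping, not the script's actual layout):

    bdevperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py

    # Same three invocations as ftl/bdevperf.sh@30-32, reshaped as a loop.
    for args in '-q 1 -w randwrite -t 4 -o 69632' \
                '-q 128 -w randwrite -t 4 -o 4096' \
                '-q 128 -w verify -t 4 -o 4096'; do
        $bdevperf_py perform_tests $args
    done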
00:16:59.691 4473.00 IOPS, 17.47 MiB/s [2024-11-04T02:27:47.745Z] 4400.50 IOPS, 17.19 MiB/s [2024-11-04T02:27:49.133Z] 4405.67 IOPS, 17.21 MiB/s [2024-11-04T02:27:49.133Z] 4432.50 IOPS, 17.31 MiB/s 00:17:02.022 Latency(us) 00:17:02.022 [2024-11-04T02:27:49.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:02.022 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:02.022 Verification LBA range: start 0x0 length 0x1400000 00:17:02.022 ftl0 : 4.01 4446.50 17.37 0.00 0.00 28705.04 363.91 43354.58 00:17:02.022 [2024-11-04T02:27:49.133Z] =================================================================================================================== 00:17:02.022 [2024-11-04T02:27:49.133Z] Total : 4446.50 17.37 0.00 0.00 28705.04 0.00 43354.58 00:17:02.022 [2024-11-04 02:27:48.741177] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:02.022 { 00:17:02.022 "results": [ 00:17:02.022 { 00:17:02.022 "job": "ftl0", 00:17:02.022 "core_mask": "0x1", 00:17:02.022 "workload": "verify", 00:17:02.022 "status": "finished", 00:17:02.022 "verify_range": { 00:17:02.022 "start": 0, 00:17:02.022 "length": 20971520 00:17:02.022 }, 00:17:02.022 "queue_depth": 128, 00:17:02.022 "io_size": 4096, 00:17:02.022 "runtime": 4.012819, 00:17:02.022 "iops": 4446.500078872235, 00:17:02.022 "mibps": 17.369140933094666, 00:17:02.022 "io_failed": 0, 00:17:02.022 "io_timeout": 0, 00:17:02.022 "avg_latency_us": 28705.037739255647, 00:17:02.022 "min_latency_us": 363.9138461538462, 00:17:02.023 "max_latency_us": 43354.584615384614 00:17:02.023 } 00:17:02.023 ], 00:17:02.023 "core_count": 1 00:17:02.023 } 00:17:02.023 02:27:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:17:02.023 [2024-11-04 02:27:48.957213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.023 [2024-11-04 02:27:48.957284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:02.023 [2024-11-04 02:27:48.957299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:02.023 [2024-11-04 02:27:48.957315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.023 [2024-11-04 02:27:48.957339] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:02.023 [2024-11-04 02:27:48.960634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.023 [2024-11-04 02:27:48.960846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:02.023 [2024-11-04 02:27:48.960892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.271 ms 00:17:02.023 [2024-11-04 02:27:48.960902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.023 [2024-11-04 02:27:48.964118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.023 [2024-11-04 02:27:48.964169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:02.023 [2024-11-04 02:27:48.964184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.174 ms 00:17:02.023 [2024-11-04 02:27:48.964193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.285 [2024-11-04 02:27:49.197817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.285 [2024-11-04 02:27:49.197897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:17:02.285 [2024-11-04 02:27:49.197918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 233.596 ms 00:17:02.285 [2024-11-04 02:27:49.197927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.285 [2024-11-04 02:27:49.204130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.285 [2024-11-04 02:27:49.204177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:02.285 [2024-11-04 02:27:49.204193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.149 ms 00:17:02.285 [2024-11-04 02:27:49.204202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.285 [2024-11-04 02:27:49.230998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.285 [2024-11-04 02:27:49.231217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:02.285 [2024-11-04 02:27:49.231248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.731 ms 00:17:02.285 [2024-11-04 02:27:49.231257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.285 [2024-11-04 02:27:49.250624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.285 [2024-11-04 02:27:49.250836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:02.285 [2024-11-04 02:27:49.250889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.314 ms 00:17:02.285 [2024-11-04 02:27:49.250903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.285 [2024-11-04 02:27:49.251099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.285 [2024-11-04 02:27:49.251115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:02.285 [2024-11-04 02:27:49.251133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:17:02.285 [2024-11-04 02:27:49.251141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.285 [2024-11-04 02:27:49.278156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.285 [2024-11-04 02:27:49.278204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:02.285 [2024-11-04 02:27:49.278220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.989 ms 00:17:02.285 [2024-11-04 02:27:49.278228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.285 [2024-11-04 02:27:49.304841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.285 [2024-11-04 02:27:49.304900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:02.285 [2024-11-04 02:27:49.304916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.552 ms 00:17:02.286 [2024-11-04 02:27:49.304924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.286 [2024-11-04 02:27:49.330953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.286 [2024-11-04 02:27:49.331004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:02.286 [2024-11-04 02:27:49.331020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.969 ms 00:17:02.286 [2024-11-04 02:27:49.331027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.286 [2024-11-04 02:27:49.357332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.286 [2024-11-04 
02:27:49.357382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:02.286 [2024-11-04 02:27:49.357402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.186 ms 00:17:02.286 [2024-11-04 02:27:49.357410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.286 [2024-11-04 02:27:49.357463] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:02.286 [2024-11-04 02:27:49.357481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:02.286 [2024-11-04 02:27:49.357494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:02.286 [2024-11-04 02:27:49.357503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:02.286 [2024-11-04 02:27:49.357513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:02.286 [2024-11-04 02:27:49.357521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:02.286 [2024-11-04 02:27:49.357532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:02.286 [2024-11-04 02:27:49.357540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:02.286 [2024-11-04 02:27:49.357553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:02.286 [2024-11-04 02:27:49.357560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:02.286 [2024-11-04 02:27:49.357572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:02.286 [2024-11-04 02:27:49.357580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:02.286 [2024-11-04 02:27:49.357591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:02.286 [2024-11-04 02:27:49.357599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:02.286 [2024-11-04 02:27:49.357612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:02.286 [2024-11-04 02:27:49.357619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:02.286 [2024-11-04 02:27:49.357629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:02.286 [2024-11-04 02:27:49.357637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:02.286 [2024-11-04 02:27:49.357646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:02.286 [2024-11-04 02:27:49.357654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:02.286 [2024-11-04 02:27:49.357668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:02.286 [2024-11-04 02:27:49.357676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:02.286 [2024-11-04 02:27:49.357686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:17:02.286 [2024-11-04 02:27:49.357693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 23-96: 0 / 261120 wr_cnt: 0 state: free (74 identical per-band entries, all free, condensed) 00:17:02.287 [2024-11-04 02:27:49.358459] ftl_debug.c:
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:02.287 [2024-11-04 02:27:49.358471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:02.287 [2024-11-04 02:27:49.358478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:02.287 [2024-11-04 02:27:49.358488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:02.287 [2024-11-04 02:27:49.358505] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:02.287 [2024-11-04 02:27:49.358516] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 87c0b6e4-dcd7-479e-846a-febf53a2588d 00:17:02.287 [2024-11-04 02:27:49.358525] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:02.287 [2024-11-04 02:27:49.358535] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:02.287 [2024-11-04 02:27:49.358542] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:02.287 [2024-11-04 02:27:49.358552] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:02.287 [2024-11-04 02:27:49.358564] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:02.287 [2024-11-04 02:27:49.358576] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:02.287 [2024-11-04 02:27:49.358583] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:02.287 [2024-11-04 02:27:49.358594] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:02.287 [2024-11-04 02:27:49.358600] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:02.287 [2024-11-04 02:27:49.358610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.287 [2024-11-04 02:27:49.358618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:02.287 [2024-11-04 02:27:49.358631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.149 ms 00:17:02.287 [2024-11-04 02:27:49.358638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.287 [2024-11-04 02:27:49.373367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.287 [2024-11-04 02:27:49.373411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:02.287 [2024-11-04 02:27:49.373430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.668 ms 00:17:02.287 [2024-11-04 02:27:49.373438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.287 [2024-11-04 02:27:49.373858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.287 [2024-11-04 02:27:49.373891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:02.287 [2024-11-04 02:27:49.373903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.393 ms 00:17:02.287 [2024-11-04 02:27:49.373912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.549 [2024-11-04 02:27:49.416367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.549 [2024-11-04 02:27:49.416580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:02.549 [2024-11-04 02:27:49.416612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.549 [2024-11-04 02:27:49.416621] mngt/ftl_mngt.c: 431:trace_step: 
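
Note on the statistics dump above: write-amplification factor is conventionally total device writes divided by user writes, and this shutdown reports total writes: 960 against user writes: 0, so the ratio is undefined and printed as "WAF: inf". A hypothetical spot-check of that arithmetic, not part of the test:

  total_writes=960 user_writes=0
  if [ "$user_writes" -eq 0 ]; then
    echo "WAF: inf"                             # no user I/O this run, only internal/metadata writes
  else
    echo "WAF: $(( total_writes / user_writes ))"
  fi
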
*NOTICE*: [FTL][ftl0] status: 0 00:17:02.549 [2024-11-04 02:27:49.416703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.549 [2024-11-04 02:27:49.416713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:02.549 [2024-11-04 02:27:49.416725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.549 [2024-11-04 02:27:49.416733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.549 [2024-11-04 02:27:49.416833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.549 [2024-11-04 02:27:49.416845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:02.549 [2024-11-04 02:27:49.416861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.549 [2024-11-04 02:27:49.416899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.549 [2024-11-04 02:27:49.416919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.549 [2024-11-04 02:27:49.416930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:02.549 [2024-11-04 02:27:49.416942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.549 [2024-11-04 02:27:49.416951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.549 [2024-11-04 02:27:49.508495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.549 [2024-11-04 02:27:49.508556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:02.549 [2024-11-04 02:27:49.508582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.549 [2024-11-04 02:27:49.508590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.549 [2024-11-04 02:27:49.583217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.549 [2024-11-04 02:27:49.583516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:02.549 [2024-11-04 02:27:49.583544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.549 [2024-11-04 02:27:49.583555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.549 [2024-11-04 02:27:49.583709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.549 [2024-11-04 02:27:49.583737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:02.549 [2024-11-04 02:27:49.583751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.549 [2024-11-04 02:27:49.583766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.549 [2024-11-04 02:27:49.583822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.549 [2024-11-04 02:27:49.583833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:02.549 [2024-11-04 02:27:49.583845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.549 [2024-11-04 02:27:49.583854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.549 [2024-11-04 02:27:49.584014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.549 [2024-11-04 02:27:49.584028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:02.549 [2024-11-04 02:27:49.584045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:17:02.549 [2024-11-04 02:27:49.584055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.549 [2024-11-04 02:27:49.584099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.549 [2024-11-04 02:27:49.584112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:02.549 [2024-11-04 02:27:49.584123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.549 [2024-11-04 02:27:49.584131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.549 [2024-11-04 02:27:49.584183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.549 [2024-11-04 02:27:49.584195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:02.549 [2024-11-04 02:27:49.584206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.549 [2024-11-04 02:27:49.584214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.549 [2024-11-04 02:27:49.584282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.549 [2024-11-04 02:27:49.584305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:02.549 [2024-11-04 02:27:49.584317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.549 [2024-11-04 02:27:49.584328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.549 [2024-11-04 02:27:49.584507] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 627.226 ms, result 0 00:17:02.549 true 00:17:02.549 02:27:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 73206 00:17:02.549 02:27:49 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 73206 ']' 00:17:02.549 02:27:49 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # kill -0 73206 00:17:02.549 02:27:49 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # uname 00:17:02.549 02:27:49 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:02.549 02:27:49 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73206 00:17:02.549 02:27:49 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:02.549 02:27:49 ftl.ftl_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:02.549 02:27:49 ftl.ftl_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73206' 00:17:02.549 killing process with pid 73206 00:17:02.549 02:27:49 ftl.ftl_bdevperf -- common/autotest_common.sh@971 -- # kill 73206 00:17:02.549 Received shutdown signal, test time was about 4.000000 seconds 00:17:02.549 00:17:02.549 Latency(us) 00:17:02.549 [2024-11-04T02:27:49.660Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:02.549 [2024-11-04T02:27:49.661Z] =================================================================================================================== 00:17:02.550 [2024-11-04T02:27:49.661Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:02.550 02:27:49 ftl.ftl_bdevperf -- common/autotest_common.sh@976 -- # wait 73206 00:17:10.706 Remove shared memory files 00:17:10.706 02:27:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:10.706 02:27:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:17:10.706 02:27:56 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:10.706 02:27:56 
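
The teardown trace above follows autotest_common.sh's killprocess flow: probe liveness with kill -0, read the process name with ps --no-headers -o comm=, send the signal, then wait to reap the target and propagate its exit status. A minimal sketch of that pattern, simplified from what the trace shows (the real helper also special-cases sudo-owned processes):

  killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0      # nothing to do if already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0 for an SPDK app
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid"                                 # works because the app is our child process
  }
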
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:17:10.706 02:27:56 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:17:10.706 02:27:56 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:17:10.706 02:27:56 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:10.706 02:27:56 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:17:10.706 ************************************ 00:17:10.706 END TEST ftl_bdevperf 00:17:10.706 ************************************ 00:17:10.706 00:17:10.706 real 0m28.957s 00:17:10.706 user 0m31.511s 00:17:10.706 sys 0m1.172s 00:17:10.706 02:27:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:10.706 02:27:56 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:10.706 02:27:56 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:10.706 02:27:56 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:17:10.706 02:27:56 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:10.706 02:27:56 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:10.706 ************************************ 00:17:10.706 START TEST ftl_trim 00:17:10.706 ************************************ 00:17:10.706 02:27:56 ftl.ftl_trim -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:10.706 * Looking for test storage... 00:17:10.706 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:10.706 02:27:56 ftl.ftl_trim -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:10.706 02:27:56 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:10.706 02:27:56 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lcov --version 00:17:10.706 02:27:56 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:10.706 02:27:56 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:10.706 02:27:56 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:10.706 02:27:56 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:10.706 02:27:56 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:17:10.706 02:27:56 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:17:10.706 02:27:56 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:17:10.706 02:27:56 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:17:10.706 02:27:56 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:17:10.706 02:27:56 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:17:10.706 02:27:56 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:17:10.706 02:27:56 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:10.706 02:27:56 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:17:10.706 02:27:56 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:17:10.706 02:27:56 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:10.706 02:27:56 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:10.706 02:27:56 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:17:10.706 02:27:56 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:17:10.706 02:27:56 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:10.706 02:27:56 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:17:10.706 02:27:56 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:17:10.706 02:27:56 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:17:10.706 02:27:56 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:17:10.706 02:27:56 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:10.706 02:27:56 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:17:10.706 02:27:56 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:17:10.706 02:27:56 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:10.706 02:27:56 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:10.706 02:27:56 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:17:10.706 02:27:56 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:10.706 02:27:56 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:10.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.706 --rc genhtml_branch_coverage=1 00:17:10.707 --rc genhtml_function_coverage=1 00:17:10.707 --rc genhtml_legend=1 00:17:10.707 --rc geninfo_all_blocks=1 00:17:10.707 --rc geninfo_unexecuted_blocks=1 00:17:10.707 00:17:10.707 ' 00:17:10.707 02:27:56 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:10.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.707 --rc genhtml_branch_coverage=1 00:17:10.707 --rc genhtml_function_coverage=1 00:17:10.707 --rc genhtml_legend=1 00:17:10.707 --rc geninfo_all_blocks=1 00:17:10.707 --rc geninfo_unexecuted_blocks=1 00:17:10.707 00:17:10.707 ' 00:17:10.707 02:27:56 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:10.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.707 --rc genhtml_branch_coverage=1 00:17:10.707 --rc genhtml_function_coverage=1 00:17:10.707 --rc genhtml_legend=1 00:17:10.707 --rc geninfo_all_blocks=1 00:17:10.707 --rc geninfo_unexecuted_blocks=1 00:17:10.707 00:17:10.707 ' 00:17:10.707 02:27:56 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:10.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.707 --rc genhtml_branch_coverage=1 00:17:10.707 --rc genhtml_function_coverage=1 00:17:10.707 --rc genhtml_legend=1 00:17:10.707 --rc geninfo_all_blocks=1 00:17:10.707 --rc geninfo_unexecuted_blocks=1 00:17:10.707 00:17:10.707 ' 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
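
The lcov probe above leans on scripts/common.sh's cmp_versions helper, which splits dotted versions on ".", "-", and ":" and compares them field by field, so lt 1.15 2 is true and the branch-coverage LCOV_OPTS get exported. A rough standalone equivalent, assuming purely numeric fields:

  lt() {                       # usage: lt VER1 VER2 -> status 0 if VER1 < VER2
    local IFS='.-:'
    local -a a=($1) b=($2)     # split "1.15" into (1 15), etc.
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
      local x=${a[i]:-0} y=${b[i]:-0}
      (( x < y )) && return 0
      (( x > y )) && return 1
    done
    return 1                   # equal is not less-than
  }
  lt 1.15 2 && echo "lcov 1.15 predates 2"
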
00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:10.707 02:27:56 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=73576 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 73576 00:17:10.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.707 02:27:56 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 73576 ']' 00:17:10.707 02:27:56 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.707 02:27:56 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:10.707 02:27:56 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.707 02:27:56 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:10.707 02:27:56 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:10.707 02:27:56 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:17:10.707 [2024-11-04 02:27:56.946678] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:17:10.707 [2024-11-04 02:27:56.946839] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73576 ] 00:17:10.707 [2024-11-04 02:27:57.116283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:10.707 [2024-11-04 02:27:57.269084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.707 [2024-11-04 02:27:57.269668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:10.707 [2024-11-04 02:27:57.269776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.969 02:27:58 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:10.969 02:27:58 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:17:10.969 02:27:58 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:10.969 02:27:58 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:17:10.969 02:27:58 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:10.969 02:27:58 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:17:10.969 02:27:58 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:17:10.969 02:27:58 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:11.543 02:27:58 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:11.543 02:27:58 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:17:11.543 02:27:58 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:11.543 02:27:58 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:17:11.543 02:27:58 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:11.543 02:27:58 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:17:11.543 02:27:58 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:17:11.543 02:27:58 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:11.543 02:27:58 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:11.543 { 00:17:11.543 "name": "nvme0n1", 00:17:11.543 "aliases": [ 
00:17:11.543 "9b662387-df2d-49ba-8f0c-0a9a3ce1267d" 00:17:11.543 ], 00:17:11.543 "product_name": "NVMe disk", 00:17:11.543 "block_size": 4096, 00:17:11.543 "num_blocks": 1310720, 00:17:11.543 "uuid": "9b662387-df2d-49ba-8f0c-0a9a3ce1267d", 00:17:11.543 "numa_id": -1, 00:17:11.543 "assigned_rate_limits": { 00:17:11.543 "rw_ios_per_sec": 0, 00:17:11.543 "rw_mbytes_per_sec": 0, 00:17:11.543 "r_mbytes_per_sec": 0, 00:17:11.543 "w_mbytes_per_sec": 0 00:17:11.543 }, 00:17:11.543 "claimed": true, 00:17:11.543 "claim_type": "read_many_write_one", 00:17:11.543 "zoned": false, 00:17:11.543 "supported_io_types": { 00:17:11.543 "read": true, 00:17:11.543 "write": true, 00:17:11.543 "unmap": true, 00:17:11.543 "flush": true, 00:17:11.543 "reset": true, 00:17:11.543 "nvme_admin": true, 00:17:11.543 "nvme_io": true, 00:17:11.543 "nvme_io_md": false, 00:17:11.543 "write_zeroes": true, 00:17:11.543 "zcopy": false, 00:17:11.543 "get_zone_info": false, 00:17:11.543 "zone_management": false, 00:17:11.543 "zone_append": false, 00:17:11.543 "compare": true, 00:17:11.543 "compare_and_write": false, 00:17:11.543 "abort": true, 00:17:11.543 "seek_hole": false, 00:17:11.543 "seek_data": false, 00:17:11.543 "copy": true, 00:17:11.543 "nvme_iov_md": false 00:17:11.543 }, 00:17:11.543 "driver_specific": { 00:17:11.543 "nvme": [ 00:17:11.543 { 00:17:11.543 "pci_address": "0000:00:11.0", 00:17:11.543 "trid": { 00:17:11.543 "trtype": "PCIe", 00:17:11.543 "traddr": "0000:00:11.0" 00:17:11.543 }, 00:17:11.543 "ctrlr_data": { 00:17:11.543 "cntlid": 0, 00:17:11.543 "vendor_id": "0x1b36", 00:17:11.543 "model_number": "QEMU NVMe Ctrl", 00:17:11.543 "serial_number": "12341", 00:17:11.543 "firmware_revision": "8.0.0", 00:17:11.543 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:11.543 "oacs": { 00:17:11.543 "security": 0, 00:17:11.543 "format": 1, 00:17:11.543 "firmware": 0, 00:17:11.543 "ns_manage": 1 00:17:11.543 }, 00:17:11.543 "multi_ctrlr": false, 00:17:11.543 "ana_reporting": false 00:17:11.543 }, 00:17:11.543 "vs": { 00:17:11.543 "nvme_version": "1.4" 00:17:11.543 }, 00:17:11.543 "ns_data": { 00:17:11.543 "id": 1, 00:17:11.543 "can_share": false 00:17:11.543 } 00:17:11.543 } 00:17:11.543 ], 00:17:11.543 "mp_policy": "active_passive" 00:17:11.543 } 00:17:11.543 } 00:17:11.543 ]' 00:17:11.543 02:27:58 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:11.543 02:27:58 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:17:11.543 02:27:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:11.804 02:27:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=1310720 00:17:11.804 02:27:58 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:17:11.804 02:27:58 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 5120 00:17:11.804 02:27:58 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:17:11.804 02:27:58 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:11.804 02:27:58 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:17:11.804 02:27:58 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:11.804 02:27:58 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:11.804 02:27:58 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=c64443ee-5358-454c-a119-5df22f07c92a 00:17:11.804 02:27:58 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:17:11.804 02:27:58 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u c64443ee-5358-454c-a119-5df22f07c92a 00:17:12.065 02:27:59 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:12.324 02:27:59 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=5516dc46-f9ca-4d88-8075-da5f44f4df17 00:17:12.324 02:27:59 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5516dc46-f9ca-4d88-8075-da5f44f4df17 00:17:12.582 02:27:59 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=21555391-42c6-4102-85ff-de97110b586f 00:17:12.582 02:27:59 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 21555391-42c6-4102-85ff-de97110b586f 00:17:12.582 02:27:59 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:17:12.582 02:27:59 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:12.582 02:27:59 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=21555391-42c6-4102-85ff-de97110b586f 00:17:12.582 02:27:59 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:17:12.582 02:27:59 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 21555391-42c6-4102-85ff-de97110b586f 00:17:12.582 02:27:59 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=21555391-42c6-4102-85ff-de97110b586f 00:17:12.582 02:27:59 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:12.582 02:27:59 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:17:12.582 02:27:59 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:17:12.582 02:27:59 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 21555391-42c6-4102-85ff-de97110b586f 00:17:12.840 02:27:59 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:12.840 { 00:17:12.840 "name": "21555391-42c6-4102-85ff-de97110b586f", 00:17:12.840 "aliases": [ 00:17:12.840 "lvs/nvme0n1p0" 00:17:12.840 ], 00:17:12.840 "product_name": "Logical Volume", 00:17:12.840 "block_size": 4096, 00:17:12.840 "num_blocks": 26476544, 00:17:12.840 "uuid": "21555391-42c6-4102-85ff-de97110b586f", 00:17:12.840 "assigned_rate_limits": { 00:17:12.840 "rw_ios_per_sec": 0, 00:17:12.840 "rw_mbytes_per_sec": 0, 00:17:12.840 "r_mbytes_per_sec": 0, 00:17:12.840 "w_mbytes_per_sec": 0 00:17:12.840 }, 00:17:12.840 "claimed": false, 00:17:12.840 "zoned": false, 00:17:12.840 "supported_io_types": { 00:17:12.840 "read": true, 00:17:12.840 "write": true, 00:17:12.840 "unmap": true, 00:17:12.840 "flush": false, 00:17:12.840 "reset": true, 00:17:12.840 "nvme_admin": false, 00:17:12.840 "nvme_io": false, 00:17:12.840 "nvme_io_md": false, 00:17:12.840 "write_zeroes": true, 00:17:12.840 "zcopy": false, 00:17:12.840 "get_zone_info": false, 00:17:12.840 "zone_management": false, 00:17:12.840 "zone_append": false, 00:17:12.840 "compare": false, 00:17:12.840 "compare_and_write": false, 00:17:12.840 "abort": false, 00:17:12.840 "seek_hole": true, 00:17:12.840 "seek_data": true, 00:17:12.840 "copy": false, 00:17:12.840 "nvme_iov_md": false 00:17:12.840 }, 00:17:12.840 "driver_specific": { 00:17:12.840 "lvol": { 00:17:12.840 "lvol_store_uuid": "5516dc46-f9ca-4d88-8075-da5f44f4df17", 00:17:12.840 "base_bdev": "nvme0n1", 00:17:12.840 "thin_provision": true, 00:17:12.840 "num_allocated_clusters": 0, 00:17:12.840 "snapshot": false, 00:17:12.840 "clone": false, 00:17:12.840 "esnap_clone": false 00:17:12.840 } 00:17:12.840 } 00:17:12.840 } 00:17:12.840 ]' 00:17:12.840 02:27:59 ftl.ftl_trim -- 
common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:12.840 02:27:59 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:17:12.840 02:27:59 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:12.840 02:27:59 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:17:12.840 02:27:59 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:12.840 02:27:59 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:17:12.840 02:27:59 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:17:12.840 02:27:59 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:17:12.840 02:27:59 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:13.098 02:28:00 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:13.098 02:28:00 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:13.098 02:28:00 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 21555391-42c6-4102-85ff-de97110b586f 00:17:13.098 02:28:00 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=21555391-42c6-4102-85ff-de97110b586f 00:17:13.098 02:28:00 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:13.098 02:28:00 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:17:13.098 02:28:00 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:17:13.099 02:28:00 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 21555391-42c6-4102-85ff-de97110b586f 00:17:13.357 02:28:00 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:13.357 { 00:17:13.357 "name": "21555391-42c6-4102-85ff-de97110b586f", 00:17:13.357 "aliases": [ 00:17:13.357 "lvs/nvme0n1p0" 00:17:13.357 ], 00:17:13.357 "product_name": "Logical Volume", 00:17:13.357 "block_size": 4096, 00:17:13.357 "num_blocks": 26476544, 00:17:13.357 "uuid": "21555391-42c6-4102-85ff-de97110b586f", 00:17:13.357 "assigned_rate_limits": { 00:17:13.357 "rw_ios_per_sec": 0, 00:17:13.357 "rw_mbytes_per_sec": 0, 00:17:13.357 "r_mbytes_per_sec": 0, 00:17:13.357 "w_mbytes_per_sec": 0 00:17:13.357 }, 00:17:13.357 "claimed": false, 00:17:13.357 "zoned": false, 00:17:13.357 "supported_io_types": { 00:17:13.357 "read": true, 00:17:13.357 "write": true, 00:17:13.357 "unmap": true, 00:17:13.357 "flush": false, 00:17:13.357 "reset": true, 00:17:13.357 "nvme_admin": false, 00:17:13.357 "nvme_io": false, 00:17:13.357 "nvme_io_md": false, 00:17:13.357 "write_zeroes": true, 00:17:13.357 "zcopy": false, 00:17:13.357 "get_zone_info": false, 00:17:13.357 "zone_management": false, 00:17:13.357 "zone_append": false, 00:17:13.357 "compare": false, 00:17:13.357 "compare_and_write": false, 00:17:13.357 "abort": false, 00:17:13.357 "seek_hole": true, 00:17:13.357 "seek_data": true, 00:17:13.357 "copy": false, 00:17:13.357 "nvme_iov_md": false 00:17:13.357 }, 00:17:13.357 "driver_specific": { 00:17:13.357 "lvol": { 00:17:13.357 "lvol_store_uuid": "5516dc46-f9ca-4d88-8075-da5f44f4df17", 00:17:13.357 "base_bdev": "nvme0n1", 00:17:13.357 "thin_provision": true, 00:17:13.357 "num_allocated_clusters": 0, 00:17:13.357 "snapshot": false, 00:17:13.357 "clone": false, 00:17:13.357 "esnap_clone": false 00:17:13.357 } 00:17:13.357 } 00:17:13.357 } 00:17:13.357 ]' 00:17:13.357 02:28:00 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:13.357 02:28:00 ftl.ftl_trim -- 
common/autotest_common.sh@1385 -- # bs=4096 00:17:13.357 02:28:00 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:13.357 02:28:00 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:17:13.357 02:28:00 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:13.357 02:28:00 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:17:13.357 02:28:00 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:17:13.357 02:28:00 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:13.615 02:28:00 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:17:13.615 02:28:00 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:17:13.615 02:28:00 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 21555391-42c6-4102-85ff-de97110b586f 00:17:13.615 02:28:00 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=21555391-42c6-4102-85ff-de97110b586f 00:17:13.615 02:28:00 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:13.615 02:28:00 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:17:13.615 02:28:00 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:17:13.615 02:28:00 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 21555391-42c6-4102-85ff-de97110b586f 00:17:13.615 02:28:00 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:13.615 { 00:17:13.615 "name": "21555391-42c6-4102-85ff-de97110b586f", 00:17:13.615 "aliases": [ 00:17:13.615 "lvs/nvme0n1p0" 00:17:13.615 ], 00:17:13.615 "product_name": "Logical Volume", 00:17:13.615 "block_size": 4096, 00:17:13.615 "num_blocks": 26476544, 00:17:13.615 "uuid": "21555391-42c6-4102-85ff-de97110b586f", 00:17:13.615 "assigned_rate_limits": { 00:17:13.615 "rw_ios_per_sec": 0, 00:17:13.615 "rw_mbytes_per_sec": 0, 00:17:13.615 "r_mbytes_per_sec": 0, 00:17:13.615 "w_mbytes_per_sec": 0 00:17:13.615 }, 00:17:13.615 "claimed": false, 00:17:13.615 "zoned": false, 00:17:13.615 "supported_io_types": { 00:17:13.615 "read": true, 00:17:13.615 "write": true, 00:17:13.615 "unmap": true, 00:17:13.615 "flush": false, 00:17:13.615 "reset": true, 00:17:13.615 "nvme_admin": false, 00:17:13.615 "nvme_io": false, 00:17:13.615 "nvme_io_md": false, 00:17:13.615 "write_zeroes": true, 00:17:13.615 "zcopy": false, 00:17:13.615 "get_zone_info": false, 00:17:13.615 "zone_management": false, 00:17:13.615 "zone_append": false, 00:17:13.615 "compare": false, 00:17:13.615 "compare_and_write": false, 00:17:13.615 "abort": false, 00:17:13.615 "seek_hole": true, 00:17:13.615 "seek_data": true, 00:17:13.615 "copy": false, 00:17:13.615 "nvme_iov_md": false 00:17:13.615 }, 00:17:13.615 "driver_specific": { 00:17:13.615 "lvol": { 00:17:13.616 "lvol_store_uuid": "5516dc46-f9ca-4d88-8075-da5f44f4df17", 00:17:13.616 "base_bdev": "nvme0n1", 00:17:13.616 "thin_provision": true, 00:17:13.616 "num_allocated_clusters": 0, 00:17:13.616 "snapshot": false, 00:17:13.616 "clone": false, 00:17:13.616 "esnap_clone": false 00:17:13.616 } 00:17:13.616 } 00:17:13.616 } 00:17:13.616 ]' 00:17:13.616 02:28:00 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:13.874 02:28:00 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:17:13.874 02:28:00 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:13.874 02:28:00 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # 
nb=26476544 00:17:13.874 02:28:00 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:13.874 02:28:00 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:17:13.874 02:28:00 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:17:13.874 02:28:00 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 21555391-42c6-4102-85ff-de97110b586f -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:17:13.874 [2024-11-04 02:28:00.975909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.874 [2024-11-04 02:28:00.975948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:13.874 [2024-11-04 02:28:00.975961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:13.874 [2024-11-04 02:28:00.975968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.874 [2024-11-04 02:28:00.978316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.874 [2024-11-04 02:28:00.978345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:13.874 [2024-11-04 02:28:00.978356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.327 ms 00:17:13.874 [2024-11-04 02:28:00.978363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.874 [2024-11-04 02:28:00.978436] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:13.874 [2024-11-04 02:28:00.978998] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:13.874 [2024-11-04 02:28:00.979035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.874 [2024-11-04 02:28:00.979042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:13.874 [2024-11-04 02:28:00.979050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.604 ms 00:17:13.874 [2024-11-04 02:28:00.979057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.874 [2024-11-04 02:28:00.979214] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c6f3f3b2-f208-412f-922b-b47e9684c98b 00:17:13.874 [2024-11-04 02:28:00.980474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.874 [2024-11-04 02:28:00.980506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:13.874 [2024-11-04 02:28:00.980515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:13.874 [2024-11-04 02:28:00.980523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.133 [2024-11-04 02:28:00.987210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.133 [2024-11-04 02:28:00.987239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:14.133 [2024-11-04 02:28:00.987247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.619 ms 00:17:14.133 [2024-11-04 02:28:00.987256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.133 [2024-11-04 02:28:00.987357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.133 [2024-11-04 02:28:00.987367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:14.133 [2024-11-04 02:28:00.987375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
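
Two sizing details in the bdev_ftl_create call and the startup log below are worth a sanity check: --l2p_dram_limit 60 is apparently trim.sh's l2p_percentage=60 applied to the 103424 MiB base volume (103424 x 60 / 100 / 1024 = 60 in integer MiB), and the reported L2P entries: 23592960 with a 4-byte address size account exactly for the 90.00 MiB l2p region in the layout dump; --core_mask 7 matches the three reactors started earlier. A hypothetical shell spot-check of both figures:

  echo $(( 103424 * 60 / 100 / 1024 ))         # 60 -> --l2p_dram_limit in MiB
  echo $(( 23592960 * 4 / 1024 / 1024 ))       # 90 -> "Region l2p ... 90.00 MiB"
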
duration: 0.058 ms 00:17:14.133 [2024-11-04 02:28:00.987386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.133 [2024-11-04 02:28:00.987414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.133 [2024-11-04 02:28:00.987423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:14.133 [2024-11-04 02:28:00.987429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:14.133 [2024-11-04 02:28:00.987436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.133 [2024-11-04 02:28:00.987468] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:14.133 [2024-11-04 02:28:00.990680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.134 [2024-11-04 02:28:00.990706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:14.134 [2024-11-04 02:28:00.990717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.214 ms 00:17:14.134 [2024-11-04 02:28:00.990723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.134 [2024-11-04 02:28:00.990778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.134 [2024-11-04 02:28:00.990787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:14.134 [2024-11-04 02:28:00.990795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:14.134 [2024-11-04 02:28:00.990814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.134 [2024-11-04 02:28:00.990840] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:14.134 [2024-11-04 02:28:00.990966] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:14.134 [2024-11-04 02:28:00.990986] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:14.134 [2024-11-04 02:28:00.990997] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:14.134 [2024-11-04 02:28:00.991007] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:14.134 [2024-11-04 02:28:00.991015] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:14.134 [2024-11-04 02:28:00.991024] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:14.134 [2024-11-04 02:28:00.991030] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:14.134 [2024-11-04 02:28:00.991038] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:14.134 [2024-11-04 02:28:00.991045] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:14.134 [2024-11-04 02:28:00.991054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.134 [2024-11-04 02:28:00.991061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:14.134 [2024-11-04 02:28:00.991070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:17:14.134 [2024-11-04 02:28:00.991076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.134 [2024-11-04 02:28:00.991155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.134 
[2024-11-04 02:28:00.991162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:14.134 [2024-11-04 02:28:00.991170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:14.134 [2024-11-04 02:28:00.991175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.134 [2024-11-04 02:28:00.991282] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:14.134 [2024-11-04 02:28:00.991296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:14.134 [2024-11-04 02:28:00.991306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:14.134 [2024-11-04 02:28:00.991312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.134 [2024-11-04 02:28:00.991320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:14.134 [2024-11-04 02:28:00.991327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:14.134 [2024-11-04 02:28:00.991333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:14.134 [2024-11-04 02:28:00.991339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:14.134 [2024-11-04 02:28:00.991346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:14.134 [2024-11-04 02:28:00.991352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:14.134 [2024-11-04 02:28:00.991358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:14.134 [2024-11-04 02:28:00.991363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:14.134 [2024-11-04 02:28:00.991370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:14.134 [2024-11-04 02:28:00.991375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:14.134 [2024-11-04 02:28:00.991382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:14.134 [2024-11-04 02:28:00.991387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.134 [2024-11-04 02:28:00.991395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:14.134 [2024-11-04 02:28:00.991400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:14.134 [2024-11-04 02:28:00.991407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.134 [2024-11-04 02:28:00.991412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:14.134 [2024-11-04 02:28:00.991421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:14.134 [2024-11-04 02:28:00.991426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:14.134 [2024-11-04 02:28:00.991434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:14.134 [2024-11-04 02:28:00.991440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:14.134 [2024-11-04 02:28:00.991446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:14.134 [2024-11-04 02:28:00.991451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:14.134 [2024-11-04 02:28:00.991458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:14.134 [2024-11-04 02:28:00.991463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:14.134 [2024-11-04 02:28:00.991471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:17:14.134 [2024-11-04 02:28:00.991476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:14.134 [2024-11-04 02:28:00.991483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:14.134 [2024-11-04 02:28:00.991487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:14.134 [2024-11-04 02:28:00.991495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:14.134 [2024-11-04 02:28:00.991501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:14.134 [2024-11-04 02:28:00.991507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:14.134 [2024-11-04 02:28:00.991511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:14.134 [2024-11-04 02:28:00.991518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:14.134 [2024-11-04 02:28:00.991523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:14.134 [2024-11-04 02:28:00.991530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:14.134 [2024-11-04 02:28:00.991535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.134 [2024-11-04 02:28:00.991542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:14.134 [2024-11-04 02:28:00.991547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:14.134 [2024-11-04 02:28:00.991556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.134 [2024-11-04 02:28:00.991560] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:14.134 [2024-11-04 02:28:00.991568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:14.134 [2024-11-04 02:28:00.991575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:14.134 [2024-11-04 02:28:00.991582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.134 [2024-11-04 02:28:00.991588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:14.134 [2024-11-04 02:28:00.991597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:14.134 [2024-11-04 02:28:00.991603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:14.134 [2024-11-04 02:28:00.991610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:14.134 [2024-11-04 02:28:00.991614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:14.134 [2024-11-04 02:28:00.991621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:14.134 [2024-11-04 02:28:00.991629] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:14.134 [2024-11-04 02:28:00.991640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:14.134 [2024-11-04 02:28:00.991647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:14.134 [2024-11-04 02:28:00.991654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:14.134 [2024-11-04 02:28:00.991660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:17:14.134 [2024-11-04 02:28:00.991669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:14.134 [2024-11-04 02:28:00.991675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:14.134 [2024-11-04 02:28:00.991682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:14.134 [2024-11-04 02:28:00.991687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:14.134 [2024-11-04 02:28:00.991695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:14.134 [2024-11-04 02:28:00.991700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:14.134 [2024-11-04 02:28:00.991709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:14.134 [2024-11-04 02:28:00.991724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:14.134 [2024-11-04 02:28:00.991732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:14.134 [2024-11-04 02:28:00.991738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:14.134 [2024-11-04 02:28:00.991745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:14.134 [2024-11-04 02:28:00.991751] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:14.135 [2024-11-04 02:28:00.991760] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:14.135 [2024-11-04 02:28:00.991766] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:14.135 [2024-11-04 02:28:00.991775] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:14.135 [2024-11-04 02:28:00.991781] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:14.135 [2024-11-04 02:28:00.991788] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:14.135 [2024-11-04 02:28:00.991794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.135 [2024-11-04 02:28:00.991806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:14.135 [2024-11-04 02:28:00.991812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:17:14.135 [2024-11-04 02:28:00.991820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.135 [2024-11-04 02:28:00.991917] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:17:14.135 [2024-11-04 02:28:00.991937] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:16.663 [2024-11-04 02:28:03.549764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.663 [2024-11-04 02:28:03.549828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:16.663 [2024-11-04 02:28:03.549848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2557.835 ms 00:17:16.663 [2024-11-04 02:28:03.549859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.663 [2024-11-04 02:28:03.578243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.663 [2024-11-04 02:28:03.578290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:16.663 [2024-11-04 02:28:03.578304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.116 ms 00:17:16.663 [2024-11-04 02:28:03.578314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.663 [2024-11-04 02:28:03.578448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.663 [2024-11-04 02:28:03.578466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:16.663 [2024-11-04 02:28:03.578476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:16.663 [2024-11-04 02:28:03.578488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.663 [2024-11-04 02:28:03.619947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.663 [2024-11-04 02:28:03.619991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:16.663 [2024-11-04 02:28:03.620008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.404 ms 00:17:16.663 [2024-11-04 02:28:03.620018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.663 [2024-11-04 02:28:03.620099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.663 [2024-11-04 02:28:03.620113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:16.663 [2024-11-04 02:28:03.620123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:16.663 [2024-11-04 02:28:03.620132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.663 [2024-11-04 02:28:03.620564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.663 [2024-11-04 02:28:03.620597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:16.663 [2024-11-04 02:28:03.620611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:17:16.663 [2024-11-04 02:28:03.620623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.663 [2024-11-04 02:28:03.620791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.663 [2024-11-04 02:28:03.620813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:16.663 [2024-11-04 02:28:03.620825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:17:16.663 [2024-11-04 02:28:03.620841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.663 [2024-11-04 02:28:03.639400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.663 [2024-11-04 02:28:03.639435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:17:16.663 [2024-11-04 02:28:03.639445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.483 ms 00:17:16.663 [2024-11-04 02:28:03.639455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.663 [2024-11-04 02:28:03.651640] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:16.663 [2024-11-04 02:28:03.668892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.663 [2024-11-04 02:28:03.668925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:16.663 [2024-11-04 02:28:03.668938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.318 ms 00:17:16.663 [2024-11-04 02:28:03.668949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.663 [2024-11-04 02:28:03.739889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.663 [2024-11-04 02:28:03.739931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:16.663 [2024-11-04 02:28:03.739947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.862 ms 00:17:16.663 [2024-11-04 02:28:03.739958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.663 [2024-11-04 02:28:03.740188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.663 [2024-11-04 02:28:03.740200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:16.663 [2024-11-04 02:28:03.740214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:17:16.663 [2024-11-04 02:28:03.740222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.663 [2024-11-04 02:28:03.763041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.663 [2024-11-04 02:28:03.763075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:16.663 [2024-11-04 02:28:03.763091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.781 ms 00:17:16.663 [2024-11-04 02:28:03.763099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.923 [2024-11-04 02:28:03.785309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.924 [2024-11-04 02:28:03.785340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:16.924 [2024-11-04 02:28:03.785353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.151 ms 00:17:16.924 [2024-11-04 02:28:03.785360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.924 [2024-11-04 02:28:03.785977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.924 [2024-11-04 02:28:03.786000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:16.924 [2024-11-04 02:28:03.786011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:17:16.924 [2024-11-04 02:28:03.786018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.924 [2024-11-04 02:28:03.853924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.924 [2024-11-04 02:28:03.853957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:16.924 [2024-11-04 02:28:03.853972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.867 ms 00:17:16.924 [2024-11-04 02:28:03.853982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
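A note on reading the superblock dump above: each "Region type:... blk_offs:... blk_sz:..." entry is expressed in 4 KiB FTL blocks (the bdev JSON later in this run reports "block_size": 4096), so the entries can be cross-checked against the MiB figures in the human-readable layout dump. A minimal shell sketch of the conversion, assuming that 4 KiB block size, for the type:0x2 entry (blk_offs:0x20 blk_sz:0x5a00), which lines up with the "Region l2p" figures (offset: 0.12 MiB, blocks: 90.00 MiB):

  # convert one superblock region entry from 4 KiB blocks to MiB (4096-byte blocks assumed)
  blk_offs=0x20 blk_sz=0x5a00
  echo "offset: $(( blk_offs * 4096 / 1024 )) KiB"     # 128 KiB, printed by FTL as 0.12 MiB
  echo "size:   $(( blk_sz * 4096 / 1048576 )) MiB"    # 90 MiB, matching "blocks: 90.00 MiB"
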
00:17:16.924 [2024-11-04 02:28:03.878584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.924 [2024-11-04 02:28:03.878619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:16.924 [2024-11-04 02:28:03.878632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.523 ms 00:17:16.924 [2024-11-04 02:28:03.878639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.924 [2024-11-04 02:28:03.901392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.924 [2024-11-04 02:28:03.901427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:16.924 [2024-11-04 02:28:03.901438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.702 ms 00:17:16.924 [2024-11-04 02:28:03.901445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.924 [2024-11-04 02:28:03.924507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.924 [2024-11-04 02:28:03.924541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:16.924 [2024-11-04 02:28:03.924555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.004 ms 00:17:16.924 [2024-11-04 02:28:03.924575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.924 [2024-11-04 02:28:03.924623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.924 [2024-11-04 02:28:03.924633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:16.924 [2024-11-04 02:28:03.924646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:16.924 [2024-11-04 02:28:03.924656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.924 [2024-11-04 02:28:03.924741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.924 [2024-11-04 02:28:03.924750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:16.924 [2024-11-04 02:28:03.924760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:16.924 [2024-11-04 02:28:03.924768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.924 [2024-11-04 02:28:03.925708] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:16.924 [2024-11-04 02:28:03.928727] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2949.503 ms, result 0 00:17:16.924 [2024-11-04 02:28:03.929599] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:16.924 { 00:17:16.924 "name": "ftl0", 00:17:16.924 "uuid": "c6f3f3b2-f208-412f-922b-b47e9684c98b" 00:17:16.924 } 00:17:16.924 02:28:03 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:17:16.924 02:28:03 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:17:16.924 02:28:03 ftl.ftl_trim -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:16.924 02:28:03 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local i 00:17:16.924 02:28:03 ftl.ftl_trim -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:16.924 02:28:03 ftl.ftl_trim -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:16.924 02:28:03 ftl.ftl_trim -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:17.210 02:28:04 ftl.ftl_trim -- 
common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:17.472 [ 00:17:17.472 { 00:17:17.472 "name": "ftl0", 00:17:17.472 "aliases": [ 00:17:17.472 "c6f3f3b2-f208-412f-922b-b47e9684c98b" 00:17:17.472 ], 00:17:17.472 "product_name": "FTL disk", 00:17:17.472 "block_size": 4096, 00:17:17.472 "num_blocks": 23592960, 00:17:17.472 "uuid": "c6f3f3b2-f208-412f-922b-b47e9684c98b", 00:17:17.472 "assigned_rate_limits": { 00:17:17.472 "rw_ios_per_sec": 0, 00:17:17.472 "rw_mbytes_per_sec": 0, 00:17:17.472 "r_mbytes_per_sec": 0, 00:17:17.472 "w_mbytes_per_sec": 0 00:17:17.472 }, 00:17:17.472 "claimed": false, 00:17:17.472 "zoned": false, 00:17:17.472 "supported_io_types": { 00:17:17.472 "read": true, 00:17:17.472 "write": true, 00:17:17.472 "unmap": true, 00:17:17.472 "flush": true, 00:17:17.472 "reset": false, 00:17:17.472 "nvme_admin": false, 00:17:17.472 "nvme_io": false, 00:17:17.472 "nvme_io_md": false, 00:17:17.472 "write_zeroes": true, 00:17:17.472 "zcopy": false, 00:17:17.472 "get_zone_info": false, 00:17:17.472 "zone_management": false, 00:17:17.472 "zone_append": false, 00:17:17.472 "compare": false, 00:17:17.472 "compare_and_write": false, 00:17:17.472 "abort": false, 00:17:17.472 "seek_hole": false, 00:17:17.472 "seek_data": false, 00:17:17.472 "copy": false, 00:17:17.472 "nvme_iov_md": false 00:17:17.472 }, 00:17:17.472 "driver_specific": { 00:17:17.472 "ftl": { 00:17:17.472 "base_bdev": "21555391-42c6-4102-85ff-de97110b586f", 00:17:17.472 "cache": "nvc0n1p0" 00:17:17.472 } 00:17:17.472 } 00:17:17.472 } 00:17:17.472 ] 00:17:17.472 02:28:04 ftl.ftl_trim -- common/autotest_common.sh@909 -- # return 0 00:17:17.472 02:28:04 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:17:17.472 02:28:04 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:17.472 02:28:04 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:17:17.472 02:28:04 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:17:17.730 02:28:04 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:17:17.730 { 00:17:17.730 "name": "ftl0", 00:17:17.730 "aliases": [ 00:17:17.730 "c6f3f3b2-f208-412f-922b-b47e9684c98b" 00:17:17.730 ], 00:17:17.730 "product_name": "FTL disk", 00:17:17.730 "block_size": 4096, 00:17:17.730 "num_blocks": 23592960, 00:17:17.730 "uuid": "c6f3f3b2-f208-412f-922b-b47e9684c98b", 00:17:17.730 "assigned_rate_limits": { 00:17:17.730 "rw_ios_per_sec": 0, 00:17:17.730 "rw_mbytes_per_sec": 0, 00:17:17.730 "r_mbytes_per_sec": 0, 00:17:17.730 "w_mbytes_per_sec": 0 00:17:17.730 }, 00:17:17.730 "claimed": false, 00:17:17.730 "zoned": false, 00:17:17.730 "supported_io_types": { 00:17:17.730 "read": true, 00:17:17.730 "write": true, 00:17:17.730 "unmap": true, 00:17:17.730 "flush": true, 00:17:17.730 "reset": false, 00:17:17.730 "nvme_admin": false, 00:17:17.730 "nvme_io": false, 00:17:17.730 "nvme_io_md": false, 00:17:17.730 "write_zeroes": true, 00:17:17.730 "zcopy": false, 00:17:17.730 "get_zone_info": false, 00:17:17.730 "zone_management": false, 00:17:17.730 "zone_append": false, 00:17:17.730 "compare": false, 00:17:17.730 "compare_and_write": false, 00:17:17.730 "abort": false, 00:17:17.730 "seek_hole": false, 00:17:17.730 "seek_data": false, 00:17:17.730 "copy": false, 00:17:17.730 "nvme_iov_md": false 00:17:17.730 }, 00:17:17.730 "driver_specific": { 00:17:17.730 "ftl": { 00:17:17.730 "base_bdev": "21555391-42c6-4102-85ff-de97110b586f", 
00:17:17.730 "cache": "nvc0n1p0" 00:17:17.730 } 00:17:17.730 } 00:17:17.730 } 00:17:17.730 ]' 00:17:17.730 02:28:04 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:17:17.730 02:28:04 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:17:17.730 02:28:04 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:17.988 [2024-11-04 02:28:04.960953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.988 [2024-11-04 02:28:04.961001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:17.988 [2024-11-04 02:28:04.961016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:17.988 [2024-11-04 02:28:04.961027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.988 [2024-11-04 02:28:04.961063] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:17.988 [2024-11-04 02:28:04.963836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.988 [2024-11-04 02:28:04.963874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:17.988 [2024-11-04 02:28:04.963896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.753 ms 00:17:17.988 [2024-11-04 02:28:04.963904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.988 [2024-11-04 02:28:04.964486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.988 [2024-11-04 02:28:04.964501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:17.988 [2024-11-04 02:28:04.964511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:17:17.988 [2024-11-04 02:28:04.964519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.988 [2024-11-04 02:28:04.968175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.988 [2024-11-04 02:28:04.968198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:17.988 [2024-11-04 02:28:04.968209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.625 ms 00:17:17.988 [2024-11-04 02:28:04.968219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.988 [2024-11-04 02:28:04.975168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.988 [2024-11-04 02:28:04.975198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:17.988 [2024-11-04 02:28:04.975211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.905 ms 00:17:17.988 [2024-11-04 02:28:04.975219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.988 [2024-11-04 02:28:04.999215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.988 [2024-11-04 02:28:04.999249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:17.988 [2024-11-04 02:28:04.999265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.902 ms 00:17:17.988 [2024-11-04 02:28:04.999272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.988 [2024-11-04 02:28:05.014346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.988 [2024-11-04 02:28:05.014383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:17.988 [2024-11-04 02:28:05.014397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 15.009 ms 00:17:17.988 [2024-11-04 02:28:05.014405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.988 [2024-11-04 02:28:05.014622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.988 [2024-11-04 02:28:05.014635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:17.988 [2024-11-04 02:28:05.014645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:17:17.988 [2024-11-04 02:28:05.014653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.988 [2024-11-04 02:28:05.037148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.988 [2024-11-04 02:28:05.037180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:17.988 [2024-11-04 02:28:05.037192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.462 ms 00:17:17.988 [2024-11-04 02:28:05.037199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.988 [2024-11-04 02:28:05.059543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.989 [2024-11-04 02:28:05.059573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:17.989 [2024-11-04 02:28:05.059586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.280 ms 00:17:17.989 [2024-11-04 02:28:05.059593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.989 [2024-11-04 02:28:05.081726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.989 [2024-11-04 02:28:05.081756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:17.989 [2024-11-04 02:28:05.081768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.072 ms 00:17:17.989 [2024-11-04 02:28:05.081774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.248 [2024-11-04 02:28:05.103639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.248 [2024-11-04 02:28:05.103670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:18.248 [2024-11-04 02:28:05.103681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.743 ms 00:17:18.248 [2024-11-04 02:28:05.103688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.248 [2024-11-04 02:28:05.103748] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:18.248 [2024-11-04 02:28:05.103763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.103775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.103783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.103793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.103800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.103811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.103819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.103828] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.103836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.103845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.103852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.103861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.103882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.103892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.103899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.103908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.103915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.103926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.103934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.103943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.103950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.103974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.103981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.103991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 
[2024-11-04 02:28:05.104069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:17:18.248 [2024-11-04 02:28:05.104284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:18.248 [2024-11-04 02:28:05.104350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:18.249 [2024-11-04 02:28:05.104357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:18.249 [2024-11-04 02:28:05.104366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:18.249 [2024-11-04 02:28:05.104373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:18.249 [2024-11-04 02:28:05.104385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:18.249 [2024-11-04 02:28:05.104392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:18.249 [2024-11-04 02:28:05.104401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:18.249 [2024-11-04 02:28:05.104408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:18.249 [2024-11-04 02:28:05.104418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:18.249 [2024-11-04 02:28:05.104425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:18.249 [2024-11-04 02:28:05.104435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:18.249 [2024-11-04 02:28:05.104442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:18.249 [2024-11-04 02:28:05.104452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:18.249 [2024-11-04 02:28:05.104461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:18.249 [2024-11-04 02:28:05.104470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:18.249 [2024-11-04 02:28:05.104477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:18.249 [2024-11-04 02:28:05.104486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:17:18.249 [2024-11-04 02:28:05.104494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:18.249 [2024-11-04 02:28:05.104502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:18.249 [2024-11-04 02:28:05.104510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:18.249 [2024-11-04 02:28:05.104521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:18.249 [2024-11-04 02:28:05.104528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:18.249 [2024-11-04 02:28:05.104537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:18.249 [2024-11-04 02:28:05.104544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:18.249 [2024-11-04 02:28:05.104553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:18.249 [2024-11-04 02:28:05.104560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:18.249 [2024-11-04 02:28:05.104568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:18.249 [2024-11-04 02:28:05.104576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:18.249 [2024-11-04 02:28:05.104586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:18.249 [2024-11-04 02:28:05.104594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:18.249 [2024-11-04 02:28:05.104604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:18.249 [2024-11-04 02:28:05.104611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:18.249 [2024-11-04 02:28:05.104623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:18.249 [2024-11-04 02:28:05.104630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:18.249 [2024-11-04 02:28:05.104638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:18.249 [2024-11-04 02:28:05.104654] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:18.249 [2024-11-04 02:28:05.104665] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c6f3f3b2-f208-412f-922b-b47e9684c98b 00:17:18.249 [2024-11-04 02:28:05.104675] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:18.249 [2024-11-04 02:28:05.104683] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:18.249 [2024-11-04 02:28:05.104691] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:18.249 [2024-11-04 02:28:05.104700] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:18.249 [2024-11-04 02:28:05.104707] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:18.249 [2024-11-04 02:28:05.104717] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:17:18.249 [2024-11-04 02:28:05.104726] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:18.249 [2024-11-04 02:28:05.104735] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:18.249 [2024-11-04 02:28:05.104741] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:18.249 [2024-11-04 02:28:05.104752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.249 [2024-11-04 02:28:05.104759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:18.249 [2024-11-04 02:28:05.104769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.006 ms 00:17:18.249 [2024-11-04 02:28:05.104776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.249 [2024-11-04 02:28:05.117654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.249 [2024-11-04 02:28:05.117682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:18.249 [2024-11-04 02:28:05.117697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.844 ms 00:17:18.249 [2024-11-04 02:28:05.117704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.249 [2024-11-04 02:28:05.118114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.249 [2024-11-04 02:28:05.118132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:18.249 [2024-11-04 02:28:05.118142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.355 ms 00:17:18.249 [2024-11-04 02:28:05.118150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.249 [2024-11-04 02:28:05.164314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.249 [2024-11-04 02:28:05.164349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:18.249 [2024-11-04 02:28:05.164360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.249 [2024-11-04 02:28:05.164371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.249 [2024-11-04 02:28:05.164476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.249 [2024-11-04 02:28:05.164486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:18.249 [2024-11-04 02:28:05.164496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.249 [2024-11-04 02:28:05.164504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.249 [2024-11-04 02:28:05.164565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.249 [2024-11-04 02:28:05.164574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:18.249 [2024-11-04 02:28:05.164586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.249 [2024-11-04 02:28:05.164593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.249 [2024-11-04 02:28:05.164628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.249 [2024-11-04 02:28:05.164636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:18.249 [2024-11-04 02:28:05.164646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.249 [2024-11-04 02:28:05.164652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.249 [2024-11-04 02:28:05.249384] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.249 [2024-11-04 02:28:05.249425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:18.249 [2024-11-04 02:28:05.249438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.249 [2024-11-04 02:28:05.249446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.249 [2024-11-04 02:28:05.314803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.249 [2024-11-04 02:28:05.314842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:18.249 [2024-11-04 02:28:05.314855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.249 [2024-11-04 02:28:05.314862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.249 [2024-11-04 02:28:05.314970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.249 [2024-11-04 02:28:05.314981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:18.249 [2024-11-04 02:28:05.315008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.249 [2024-11-04 02:28:05.315017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.249 [2024-11-04 02:28:05.315079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.249 [2024-11-04 02:28:05.315090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:18.249 [2024-11-04 02:28:05.315099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.249 [2024-11-04 02:28:05.315107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.249 [2024-11-04 02:28:05.315226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.249 [2024-11-04 02:28:05.315237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:18.249 [2024-11-04 02:28:05.315247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.249 [2024-11-04 02:28:05.315254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.249 [2024-11-04 02:28:05.315308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.249 [2024-11-04 02:28:05.315322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:18.249 [2024-11-04 02:28:05.315335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.249 [2024-11-04 02:28:05.315343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.249 [2024-11-04 02:28:05.315400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.249 [2024-11-04 02:28:05.315415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:18.249 [2024-11-04 02:28:05.315430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.249 [2024-11-04 02:28:05.315438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.249 [2024-11-04 02:28:05.315499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.249 [2024-11-04 02:28:05.315517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:18.250 [2024-11-04 02:28:05.315527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.250 [2024-11-04 02:28:05.315535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:18.250 [2024-11-04 02:28:05.315726] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 354.744 ms, result 0 00:17:18.250 true 00:17:18.250 02:28:05 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 73576 00:17:18.250 02:28:05 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 73576 ']' 00:17:18.250 02:28:05 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 73576 00:17:18.250 02:28:05 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:17:18.250 02:28:05 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:18.250 02:28:05 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73576 00:17:18.508 02:28:05 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:18.508 02:28:05 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:18.508 killing process with pid 73576 00:17:18.508 02:28:05 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73576' 00:17:18.508 02:28:05 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 73576 00:17:18.508 02:28:05 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 73576 00:17:25.076 02:28:11 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:17:26.015 65536+0 records in 00:17:26.015 65536+0 records out 00:17:26.015 268435456 bytes (268 MB, 256 MiB) copied, 1.11623 s, 240 MB/s 00:17:26.015 02:28:12 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:26.015 [2024-11-04 02:28:12.886745] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
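The dd step above produces the random pattern consumed by spdk_dd: 65536 records at bs=4K come to 268435456 bytes (dd's "K" is 1024, so 4K = 4096 bytes), i.e. the 256 MiB it reports, and the 240 MB/s figure follows directly from the 1.11623 s copy time. A quick shell check of both numbers (plain arithmetic, nothing FTL-specific):

  # sanity-check the dd summary: total size and decimal-MB/s throughput
  bytes=$(( 65536 * 4096 ))                 # 268435456 B
  echo "$(( bytes / 1048576 )) MiB"         # 256 MiB
  awk -v b=268435456 -v t=1.11623 'BEGIN { printf "%.0f MB/s\n", b / t / 1e6 }'   # ~240 MB/s
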
00:17:26.015 [2024-11-04 02:28:12.887607] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73764 ] 00:17:26.015 [2024-11-04 02:28:13.050345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.273 [2024-11-04 02:28:13.138593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.273 [2024-11-04 02:28:13.343078] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:26.273 [2024-11-04 02:28:13.343124] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:26.533 [2024-11-04 02:28:13.490970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.533 [2024-11-04 02:28:13.491010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:26.533 [2024-11-04 02:28:13.491020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:26.533 [2024-11-04 02:28:13.491027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.533 [2024-11-04 02:28:13.493064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.533 [2024-11-04 02:28:13.493093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:26.533 [2024-11-04 02:28:13.493101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.026 ms 00:17:26.533 [2024-11-04 02:28:13.493106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.533 [2024-11-04 02:28:13.493159] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:26.533 [2024-11-04 02:28:13.493683] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:26.533 [2024-11-04 02:28:13.493706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.533 [2024-11-04 02:28:13.493713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:26.533 [2024-11-04 02:28:13.493719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:17:26.533 [2024-11-04 02:28:13.493725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.533 [2024-11-04 02:28:13.494665] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:26.533 [2024-11-04 02:28:13.504239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.533 [2024-11-04 02:28:13.504266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:26.533 [2024-11-04 02:28:13.504276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.576 ms 00:17:26.533 [2024-11-04 02:28:13.504283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.533 [2024-11-04 02:28:13.504342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.533 [2024-11-04 02:28:13.504351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:26.533 [2024-11-04 02:28:13.504357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:26.533 [2024-11-04 02:28:13.504362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.533 [2024-11-04 02:28:13.508538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:26.533 [2024-11-04 02:28:13.508565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:26.533 [2024-11-04 02:28:13.508572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.147 ms 00:17:26.533 [2024-11-04 02:28:13.508578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.533 [2024-11-04 02:28:13.508650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.533 [2024-11-04 02:28:13.508658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:26.533 [2024-11-04 02:28:13.508664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:17:26.533 [2024-11-04 02:28:13.508670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.533 [2024-11-04 02:28:13.508686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.533 [2024-11-04 02:28:13.508692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:26.533 [2024-11-04 02:28:13.508700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:26.533 [2024-11-04 02:28:13.508705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.533 [2024-11-04 02:28:13.508722] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:26.533 [2024-11-04 02:28:13.511275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.533 [2024-11-04 02:28:13.511298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:26.533 [2024-11-04 02:28:13.511305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.556 ms 00:17:26.533 [2024-11-04 02:28:13.511311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.533 [2024-11-04 02:28:13.511340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.533 [2024-11-04 02:28:13.511347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:26.533 [2024-11-04 02:28:13.511353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:26.533 [2024-11-04 02:28:13.511359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.533 [2024-11-04 02:28:13.511372] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:26.533 [2024-11-04 02:28:13.511386] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:26.533 [2024-11-04 02:28:13.511414] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:26.533 [2024-11-04 02:28:13.511425] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:26.533 [2024-11-04 02:28:13.511502] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:26.533 [2024-11-04 02:28:13.511511] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:26.533 [2024-11-04 02:28:13.511519] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:26.533 [2024-11-04 02:28:13.511526] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:26.533 [2024-11-04 02:28:13.511534] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:26.533 [2024-11-04 02:28:13.511542] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:26.533 [2024-11-04 02:28:13.511548] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:26.533 [2024-11-04 02:28:13.511553] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:26.533 [2024-11-04 02:28:13.511558] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:26.533 [2024-11-04 02:28:13.511564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.533 [2024-11-04 02:28:13.511570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:26.533 [2024-11-04 02:28:13.511575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:17:26.533 [2024-11-04 02:28:13.511581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.533 [2024-11-04 02:28:13.511647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.533 [2024-11-04 02:28:13.511653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:26.533 [2024-11-04 02:28:13.511659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:26.533 [2024-11-04 02:28:13.511666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.533 [2024-11-04 02:28:13.511745] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:26.533 [2024-11-04 02:28:13.511758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:26.533 [2024-11-04 02:28:13.511765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:26.533 [2024-11-04 02:28:13.511770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.533 [2024-11-04 02:28:13.511776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:26.533 [2024-11-04 02:28:13.511782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:26.533 [2024-11-04 02:28:13.511787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:26.533 [2024-11-04 02:28:13.511792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:26.533 [2024-11-04 02:28:13.511798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:26.533 [2024-11-04 02:28:13.511803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:26.533 [2024-11-04 02:28:13.511808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:26.533 [2024-11-04 02:28:13.511813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:26.533 [2024-11-04 02:28:13.511818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:26.533 [2024-11-04 02:28:13.511828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:26.533 [2024-11-04 02:28:13.511834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:26.533 [2024-11-04 02:28:13.511839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.533 [2024-11-04 02:28:13.511844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:26.533 [2024-11-04 02:28:13.511849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:26.534 [2024-11-04 02:28:13.511854] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.534 [2024-11-04 02:28:13.511860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:26.534 [2024-11-04 02:28:13.511873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:26.534 [2024-11-04 02:28:13.511879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.534 [2024-11-04 02:28:13.511884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:26.534 [2024-11-04 02:28:13.511889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:26.534 [2024-11-04 02:28:13.511894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.534 [2024-11-04 02:28:13.511899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:26.534 [2024-11-04 02:28:13.511904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:26.534 [2024-11-04 02:28:13.511909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.534 [2024-11-04 02:28:13.511914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:26.534 [2024-11-04 02:28:13.511920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:26.534 [2024-11-04 02:28:13.511925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.534 [2024-11-04 02:28:13.511929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:26.534 [2024-11-04 02:28:13.511935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:26.534 [2024-11-04 02:28:13.511940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:26.534 [2024-11-04 02:28:13.511945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:26.534 [2024-11-04 02:28:13.511950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:26.534 [2024-11-04 02:28:13.511955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:26.534 [2024-11-04 02:28:13.511960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:26.534 [2024-11-04 02:28:13.511966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:26.534 [2024-11-04 02:28:13.511973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.534 [2024-11-04 02:28:13.511978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:26.534 [2024-11-04 02:28:13.511983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:26.534 [2024-11-04 02:28:13.511988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.534 [2024-11-04 02:28:13.511993] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:26.534 [2024-11-04 02:28:13.511999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:26.534 [2024-11-04 02:28:13.512005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:26.534 [2024-11-04 02:28:13.512010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.534 [2024-11-04 02:28:13.512018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:26.534 [2024-11-04 02:28:13.512023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:26.534 [2024-11-04 02:28:13.512028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:26.534 
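
(Aside: the geometry in the layout dump above is internally consistent. Below is a minimal C sketch of the arithmetic — illustrative only, not SPDK code; the 4096-byte FTL block size is an assumption on top of what the log actually prints:

/* Sanity-check the dumped FTL geometry (illustrative only; the 4096 B
 * block size is an assumption, not taken from the log). */
#include <stdio.h>

int main(void)
{
    const unsigned long long block_size = 4096ULL;      /* assumed FTL block size */
    const unsigned long long l2p_entries = 23592960ULL; /* "L2P entries" above */
    const unsigned long long l2p_addr_size = 4ULL;      /* "L2P address size" above */
    const unsigned long long p2l_pages = 2048ULL;       /* "P2L checkpoint pages" above */

    /* L2P table footprint: entries x address size -> expect the 90.00 MiB
     * reported for "Region l2p" in the NV cache layout dump. */
    printf("l2p region: %.2f MiB\n",
           l2p_entries * l2p_addr_size / (1024.0 * 1024.0));

    /* One P2L checkpoint region: pages x block size -> expect the 8.00 MiB
     * reported for each of p2l0..p2l3. */
    printf("p2l region: %.2f MiB\n",
           p2l_pages * block_size / (1024.0 * 1024.0));

    /* User-visible capacity implied by the L2P: entries x block size. */
    printf("mapped capacity: %.2f GiB\n",
           l2p_entries * block_size / (1024.0 * 1024.0 * 1024.0));
    return 0;
}

Run, this prints 90.00 MiB, 8.00 MiB and 90.00 GiB — matching the "Region l2p" size, the p2l0..p2l3 region sizes, and the capacity implied by 23592960 mapped 4 KiB blocks.)
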
[2024-11-04 02:28:13.512033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:26.534 [2024-11-04 02:28:13.512038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:26.534 [2024-11-04 02:28:13.512043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:26.534 [2024-11-04 02:28:13.512049] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:26.534 [2024-11-04 02:28:13.512057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:26.534 [2024-11-04 02:28:13.512064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:26.534 [2024-11-04 02:28:13.512069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:26.534 [2024-11-04 02:28:13.512074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:26.534 [2024-11-04 02:28:13.512080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:26.534 [2024-11-04 02:28:13.512085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:26.534 [2024-11-04 02:28:13.512091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:26.534 [2024-11-04 02:28:13.512096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:26.534 [2024-11-04 02:28:13.512101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:26.534 [2024-11-04 02:28:13.512106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:26.534 [2024-11-04 02:28:13.512112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:26.534 [2024-11-04 02:28:13.512117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:26.534 [2024-11-04 02:28:13.512123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:26.534 [2024-11-04 02:28:13.512128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:26.534 [2024-11-04 02:28:13.512133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:26.534 [2024-11-04 02:28:13.512139] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:26.534 [2024-11-04 02:28:13.512145] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:26.534 [2024-11-04 02:28:13.512152] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:26.534 [2024-11-04 02:28:13.512157] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:26.534 [2024-11-04 02:28:13.512163] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:26.534 [2024-11-04 02:28:13.512168] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:26.534 [2024-11-04 02:28:13.512174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.534 [2024-11-04 02:28:13.512179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:26.534 [2024-11-04 02:28:13.512185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.488 ms 00:17:26.534 [2024-11-04 02:28:13.512192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.534 [2024-11-04 02:28:13.532716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.534 [2024-11-04 02:28:13.532835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:26.534 [2024-11-04 02:28:13.532847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.486 ms 00:17:26.534 [2024-11-04 02:28:13.532853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.534 [2024-11-04 02:28:13.532961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.534 [2024-11-04 02:28:13.532970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:26.534 [2024-11-04 02:28:13.532979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:26.534 [2024-11-04 02:28:13.532985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.534 [2024-11-04 02:28:13.571969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.534 [2024-11-04 02:28:13.572072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:26.534 [2024-11-04 02:28:13.572086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.967 ms 00:17:26.534 [2024-11-04 02:28:13.572094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.534 [2024-11-04 02:28:13.572153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.534 [2024-11-04 02:28:13.572162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:26.534 [2024-11-04 02:28:13.572169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:26.534 [2024-11-04 02:28:13.572174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.534 [2024-11-04 02:28:13.572449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.534 [2024-11-04 02:28:13.572461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:26.534 [2024-11-04 02:28:13.572468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:17:26.534 [2024-11-04 02:28:13.572474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.534 [2024-11-04 02:28:13.572580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.534 [2024-11-04 02:28:13.572590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:26.534 [2024-11-04 02:28:13.572597] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:17:26.534 [2024-11-04 02:28:13.572602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.534 [2024-11-04 02:28:13.583282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.534 [2024-11-04 02:28:13.583375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:26.534 [2024-11-04 02:28:13.583387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.664 ms 00:17:26.534 [2024-11-04 02:28:13.583393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.534 [2024-11-04 02:28:13.593008] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:26.534 [2024-11-04 02:28:13.593034] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:26.534 [2024-11-04 02:28:13.593044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.534 [2024-11-04 02:28:13.593050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:26.534 [2024-11-04 02:28:13.593057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.561 ms 00:17:26.534 [2024-11-04 02:28:13.593063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.534 [2024-11-04 02:28:13.611474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.534 [2024-11-04 02:28:13.611506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:26.534 [2024-11-04 02:28:13.611522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.367 ms 00:17:26.534 [2024-11-04 02:28:13.611528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.534 [2024-11-04 02:28:13.620208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.534 [2024-11-04 02:28:13.620234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:26.534 [2024-11-04 02:28:13.620242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.618 ms 00:17:26.535 [2024-11-04 02:28:13.620248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.535 [2024-11-04 02:28:13.629015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.535 [2024-11-04 02:28:13.629039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:26.535 [2024-11-04 02:28:13.629047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.727 ms 00:17:26.535 [2024-11-04 02:28:13.629052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.535 [2024-11-04 02:28:13.629503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.535 [2024-11-04 02:28:13.629530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:26.535 [2024-11-04 02:28:13.629537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.393 ms 00:17:26.535 [2024-11-04 02:28:13.629542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.793 [2024-11-04 02:28:13.672546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.793 [2024-11-04 02:28:13.672583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:26.793 [2024-11-04 02:28:13.672593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
42.987 ms 00:17:26.793 [2024-11-04 02:28:13.672599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.793 [2024-11-04 02:28:13.680190] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:26.793 [2024-11-04 02:28:13.691234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.793 [2024-11-04 02:28:13.691261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:26.793 [2024-11-04 02:28:13.691270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.577 ms 00:17:26.793 [2024-11-04 02:28:13.691278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.793 [2024-11-04 02:28:13.691345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.793 [2024-11-04 02:28:13.691352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:26.793 [2024-11-04 02:28:13.691361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:26.793 [2024-11-04 02:28:13.691367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.793 [2024-11-04 02:28:13.691403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.793 [2024-11-04 02:28:13.691410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:26.793 [2024-11-04 02:28:13.691417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:26.793 [2024-11-04 02:28:13.691422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.793 [2024-11-04 02:28:13.691441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.793 [2024-11-04 02:28:13.691450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:26.793 [2024-11-04 02:28:13.691457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:26.793 [2024-11-04 02:28:13.691464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.793 [2024-11-04 02:28:13.691486] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:26.793 [2024-11-04 02:28:13.691493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.793 [2024-11-04 02:28:13.691499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:26.793 [2024-11-04 02:28:13.691505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:26.793 [2024-11-04 02:28:13.691511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.793 [2024-11-04 02:28:13.709310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.793 [2024-11-04 02:28:13.709337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:26.793 [2024-11-04 02:28:13.709349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.785 ms 00:17:26.793 [2024-11-04 02:28:13.709355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.793 [2024-11-04 02:28:13.709424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.793 [2024-11-04 02:28:13.709432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:26.793 [2024-11-04 02:28:13.709439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:26.793 [2024-11-04 02:28:13.709444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.793 
[2024-11-04 02:28:13.710076] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:26.793 [2024-11-04 02:28:13.712336] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 218.848 ms, result 0 00:17:26.793 [2024-11-04 02:28:13.712892] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:26.793 [2024-11-04 02:28:13.727535] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:27.725  [2024-11-04T02:28:15.776Z] Copying: 48/256 [MB] (48 MBps) [2024-11-04T02:28:17.163Z] Copying: 82/256 [MB] (33 MBps) [2024-11-04T02:28:17.735Z] Copying: 94/256 [MB] (12 MBps) [2024-11-04T02:28:19.121Z] Copying: 110/256 [MB] (15 MBps) [2024-11-04T02:28:20.065Z] Copying: 132/256 [MB] (21 MBps) [2024-11-04T02:28:21.008Z] Copying: 145/256 [MB] (13 MBps) [2024-11-04T02:28:21.952Z] Copying: 158/256 [MB] (12 MBps) [2024-11-04T02:28:22.896Z] Copying: 177/256 [MB] (19 MBps) [2024-11-04T02:28:23.884Z] Copying: 188/256 [MB] (10 MBps) [2024-11-04T02:28:24.849Z] Copying: 198/256 [MB] (10 MBps) [2024-11-04T02:28:25.795Z] Copying: 209/256 [MB] (11 MBps) [2024-11-04T02:28:26.737Z] Copying: 220/256 [MB] (10 MBps) [2024-11-04T02:28:28.126Z] Copying: 237/256 [MB] (17 MBps) [2024-11-04T02:28:28.126Z] Copying: 256/256 [MB] (average 18 MBps)[2024-11-04 02:28:27.708198] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:41.015 [2024-11-04 02:28:27.718568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.015 [2024-11-04 02:28:27.718616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:41.015 [2024-11-04 02:28:27.718632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:41.015 [2024-11-04 02:28:27.718641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.015 [2024-11-04 02:28:27.718665] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:41.015 [2024-11-04 02:28:27.721658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.015 [2024-11-04 02:28:27.721697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:41.015 [2024-11-04 02:28:27.721715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.977 ms 00:17:41.015 [2024-11-04 02:28:27.721724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.015 [2024-11-04 02:28:27.724756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.015 [2024-11-04 02:28:27.724801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:41.015 [2024-11-04 02:28:27.724812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.003 ms 00:17:41.015 [2024-11-04 02:28:27.724820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.015 [2024-11-04 02:28:27.733585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.015 [2024-11-04 02:28:27.733772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:41.015 [2024-11-04 02:28:27.733792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.745 ms 00:17:41.015 [2024-11-04 02:28:27.733808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.015 
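
(Aside: the copy phase above can be cross-checked against the two app_thread IO-channel messages that bracket the data movement. A small C sketch of the arithmetic, with both timestamps taken verbatim from the log:

#include <stdio.h>

int main(void)
{
    const double mib_copied = 256.0;   /* final "Copying: 256/256 [MB]" */
    const double t_start = 13.727535;  /* "FTL IO channel created on app_thread", s past 02:28 */
    const double t_end = 27.708198;    /* "FTL IO channel destroy on app_thread", s past 02:28 */

    /* ~18.3 MiB/s over ~14 s of copying. */
    printf("avg throughput: %.1f MiB/s\n", mib_copied / (t_end - t_start));
    return 0;
}

This yields ~18.3 MiB/s, consistent with the "(average 18 MBps)" summary in the progress trace.)
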
[2024-11-04 02:28:27.740722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.015 [2024-11-04 02:28:27.740888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:41.015 [2024-11-04 02:28:27.740909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.868 ms 00:17:41.015 [2024-11-04 02:28:27.740918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.015 [2024-11-04 02:28:27.766466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.015 [2024-11-04 02:28:27.766508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:41.015 [2024-11-04 02:28:27.766520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.474 ms 00:17:41.015 [2024-11-04 02:28:27.766528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.015 [2024-11-04 02:28:27.782694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.015 [2024-11-04 02:28:27.782742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:41.015 [2024-11-04 02:28:27.782755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.119 ms 00:17:41.015 [2024-11-04 02:28:27.782770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.015 [2024-11-04 02:28:27.782945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.015 [2024-11-04 02:28:27.782957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:41.015 [2024-11-04 02:28:27.782967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:17:41.015 [2024-11-04 02:28:27.782974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.015 [2024-11-04 02:28:27.808662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.015 [2024-11-04 02:28:27.808839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:41.015 [2024-11-04 02:28:27.808861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.670 ms 00:17:41.015 [2024-11-04 02:28:27.808882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.015 [2024-11-04 02:28:27.834647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.015 [2024-11-04 02:28:27.834689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:41.015 [2024-11-04 02:28:27.834700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.647 ms 00:17:41.015 [2024-11-04 02:28:27.834708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.015 [2024-11-04 02:28:27.859329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.016 [2024-11-04 02:28:27.859373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:41.016 [2024-11-04 02:28:27.859385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.558 ms 00:17:41.016 [2024-11-04 02:28:27.859392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.016 [2024-11-04 02:28:27.883727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.016 [2024-11-04 02:28:27.883774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:41.016 [2024-11-04 02:28:27.883786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.257 ms 00:17:41.016 [2024-11-04 02:28:27.883794] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.016 [2024-11-04 02:28:27.883841] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:41.016 [2024-11-04 02:28:27.883857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.883882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.883891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.883899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.883907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.883914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.883921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.883929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.883937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.883945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.883952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.883959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.883966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.883974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.883982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.883989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.883996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884248] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 
02:28:27.884441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:41.016 [2024-11-04 02:28:27.884501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:41.017 [2024-11-04 02:28:27.884508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:41.017 [2024-11-04 02:28:27.884515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:41.017 [2024-11-04 02:28:27.884522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:41.017 [2024-11-04 02:28:27.884529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:41.017 [2024-11-04 02:28:27.884537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:41.017 [2024-11-04 02:28:27.884544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:41.017 [2024-11-04 02:28:27.884551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:41.017 [2024-11-04 02:28:27.884558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:41.017 [2024-11-04 02:28:27.884565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:41.017 [2024-11-04 02:28:27.884572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:41.017 [2024-11-04 02:28:27.884580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:41.017 [2024-11-04 02:28:27.884589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:41.017 [2024-11-04 02:28:27.884598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:41.017 [2024-11-04 02:28:27.884605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:41.017 [2024-11-04 02:28:27.884622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:41.017 [2024-11-04 02:28:27.884630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 
00:17:41.017 [2024-11-04 02:28:27.884638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:41.017 [2024-11-04 02:28:27.884645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:41.017 [2024-11-04 02:28:27.884662] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:41.017 [2024-11-04 02:28:27.884672] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c6f3f3b2-f208-412f-922b-b47e9684c98b 00:17:41.017 [2024-11-04 02:28:27.884681] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:41.017 [2024-11-04 02:28:27.884689] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:41.017 [2024-11-04 02:28:27.884696] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:41.017 [2024-11-04 02:28:27.884704] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:41.017 [2024-11-04 02:28:27.884710] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:41.017 [2024-11-04 02:28:27.884719] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:41.017 [2024-11-04 02:28:27.884727] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:41.017 [2024-11-04 02:28:27.884734] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:41.017 [2024-11-04 02:28:27.884740] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:41.017 [2024-11-04 02:28:27.884748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.017 [2024-11-04 02:28:27.884756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:41.017 [2024-11-04 02:28:27.884765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.909 ms 00:17:41.017 [2024-11-04 02:28:27.884773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.017 [2024-11-04 02:28:27.898217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.017 [2024-11-04 02:28:27.898378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:41.017 [2024-11-04 02:28:27.898396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.407 ms 00:17:41.017 [2024-11-04 02:28:27.898405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.017 [2024-11-04 02:28:27.898800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.017 [2024-11-04 02:28:27.898811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:41.017 [2024-11-04 02:28:27.898828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:17:41.017 [2024-11-04 02:28:27.898836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.017 [2024-11-04 02:28:27.937810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.017 [2024-11-04 02:28:27.937995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:41.017 [2024-11-04 02:28:27.938016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.017 [2024-11-04 02:28:27.938025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.017 [2024-11-04 02:28:27.938118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.017 [2024-11-04 02:28:27.938128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands metadata 00:17:41.017 [2024-11-04 02:28:27.938140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.017 [2024-11-04 02:28:27.938148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.017 [2024-11-04 02:28:27.938201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.017 [2024-11-04 02:28:27.938211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:41.017 [2024-11-04 02:28:27.938219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.017 [2024-11-04 02:28:27.938227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.017 [2024-11-04 02:28:27.938245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.017 [2024-11-04 02:28:27.938254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:41.017 [2024-11-04 02:28:27.938262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.017 [2024-11-04 02:28:27.938273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.017 [2024-11-04 02:28:28.021252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.017 [2024-11-04 02:28:28.021474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:41.017 [2024-11-04 02:28:28.021495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.017 [2024-11-04 02:28:28.021503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.017 [2024-11-04 02:28:28.090240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.017 [2024-11-04 02:28:28.090294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:41.017 [2024-11-04 02:28:28.090307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.017 [2024-11-04 02:28:28.090322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.017 [2024-11-04 02:28:28.090383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.017 [2024-11-04 02:28:28.090393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:41.017 [2024-11-04 02:28:28.090402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.017 [2024-11-04 02:28:28.090410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.017 [2024-11-04 02:28:28.090443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.017 [2024-11-04 02:28:28.090453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:41.017 [2024-11-04 02:28:28.090461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.017 [2024-11-04 02:28:28.090469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.017 [2024-11-04 02:28:28.090569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.017 [2024-11-04 02:28:28.090580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:41.017 [2024-11-04 02:28:28.090589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.017 [2024-11-04 02:28:28.090597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.017 [2024-11-04 02:28:28.090630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:17:41.017 [2024-11-04 02:28:28.090640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:41.017 [2024-11-04 02:28:28.090649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.017 [2024-11-04 02:28:28.090657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.017 [2024-11-04 02:28:28.090703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.017 [2024-11-04 02:28:28.090712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:41.017 [2024-11-04 02:28:28.090721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.017 [2024-11-04 02:28:28.090729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.017 [2024-11-04 02:28:28.090780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.017 [2024-11-04 02:28:28.090790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:41.017 [2024-11-04 02:28:28.090799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.017 [2024-11-04 02:28:28.090807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.017 [2024-11-04 02:28:28.091007] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 372.421 ms, result 0 00:17:41.991 00:17:41.991 00:17:41.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.991 02:28:29 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=73938 00:17:41.991 02:28:29 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 73938 00:17:41.991 02:28:29 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:41.991 02:28:29 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 73938 ']' 00:17:41.991 02:28:29 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.991 02:28:29 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:41.991 02:28:29 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.991 02:28:29 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:41.991 02:28:29 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:42.252 [2024-11-04 02:28:29.195233] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
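
(Aside: at this point ftl/trim.sh has launched spdk_tgt as pid 73938 and blocks in waitforlisten until the target accepts RPC connections. waitforlisten itself is a shell helper in autotest_common.sh; the following is only a conceptual C sketch of the readiness probe it performs against /var/tmp/spdk.sock. The 100 ms retry interval is an assumption — the log only shows max_retries=100:

/* Conceptual sketch of a "wait for RPC socket" readiness probe
 * (illustrative; the real waitforlisten is a bash function). */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static int rpc_socket_ready(const char *path)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return 0;
    int ok = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
    close(fd);
    return ok;
}

int main(void)
{
    /* Up to 100 attempts, mirroring max_retries=100 from the log. */
    for (int i = 0; i < 100; i++) {
        if (rpc_socket_ready("/var/tmp/spdk.sock")) {
            puts("spdk_tgt is listening");
            return 0;
        }
        usleep(100 * 1000); /* assumed 100 ms between attempts */
    }
    return 1;
}

Polling connect() on the UNIX socket works here because the target only starts accepting RPC connections once its application framework is up, which is exactly the condition the test script needs to wait for.)
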
00:17:42.252 [2024-11-04 02:28:29.195403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73938 ] 00:17:42.252 [2024-11-04 02:28:29.361764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.513 [2024-11-04 02:28:29.481104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.084 02:28:30 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:43.084 02:28:30 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:17:43.084 02:28:30 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:43.345 [2024-11-04 02:28:30.386914] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:43.345 [2024-11-04 02:28:30.386989] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:43.608 [2024-11-04 02:28:30.566150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.608 [2024-11-04 02:28:30.566208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:43.608 [2024-11-04 02:28:30.566231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:43.608 [2024-11-04 02:28:30.566246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.608 [2024-11-04 02:28:30.569303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.608 [2024-11-04 02:28:30.569508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:43.608 [2024-11-04 02:28:30.569532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.033 ms 00:17:43.608 [2024-11-04 02:28:30.569541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.608 [2024-11-04 02:28:30.569675] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:43.608 [2024-11-04 02:28:30.570480] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:43.608 [2024-11-04 02:28:30.570514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.608 [2024-11-04 02:28:30.570523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:43.608 [2024-11-04 02:28:30.570534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.852 ms 00:17:43.608 [2024-11-04 02:28:30.570542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.608 [2024-11-04 02:28:30.572323] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:43.608 [2024-11-04 02:28:30.586619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.608 [2024-11-04 02:28:30.586677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:43.608 [2024-11-04 02:28:30.586691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.305 ms 00:17:43.608 [2024-11-04 02:28:30.586702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.608 [2024-11-04 02:28:30.586813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.608 [2024-11-04 02:28:30.586828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:43.608 [2024-11-04 02:28:30.586837] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:43.608 [2024-11-04 02:28:30.586846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.608 [2024-11-04 02:28:30.594984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.608 [2024-11-04 02:28:30.595035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:43.608 [2024-11-04 02:28:30.595045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.064 ms 00:17:43.608 [2024-11-04 02:28:30.595055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.608 [2024-11-04 02:28:30.595172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.608 [2024-11-04 02:28:30.595185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:43.608 [2024-11-04 02:28:30.595194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:43.608 [2024-11-04 02:28:30.595204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.608 [2024-11-04 02:28:30.595229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.608 [2024-11-04 02:28:30.595243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:43.608 [2024-11-04 02:28:30.595251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:43.608 [2024-11-04 02:28:30.595261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.608 [2024-11-04 02:28:30.595284] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:43.608 [2024-11-04 02:28:30.599421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.608 [2024-11-04 02:28:30.599458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:43.608 [2024-11-04 02:28:30.599471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.140 ms 00:17:43.608 [2024-11-04 02:28:30.599479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.608 [2024-11-04 02:28:30.599556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.608 [2024-11-04 02:28:30.599565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:43.608 [2024-11-04 02:28:30.599576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:43.608 [2024-11-04 02:28:30.599584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.608 [2024-11-04 02:28:30.599608] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:43.608 [2024-11-04 02:28:30.599631] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:43.608 [2024-11-04 02:28:30.599675] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:43.608 [2024-11-04 02:28:30.599693] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:43.608 [2024-11-04 02:28:30.599816] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:43.608 [2024-11-04 02:28:30.599828] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:43.608 [2024-11-04 02:28:30.599842] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:43.608 [2024-11-04 02:28:30.599852] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:43.608 [2024-11-04 02:28:30.599885] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:43.608 [2024-11-04 02:28:30.599895] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:43.608 [2024-11-04 02:28:30.599904] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:43.608 [2024-11-04 02:28:30.599913] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:43.608 [2024-11-04 02:28:30.599926] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:43.608 [2024-11-04 02:28:30.599934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.609 [2024-11-04 02:28:30.599943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:43.609 [2024-11-04 02:28:30.599952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:17:43.609 [2024-11-04 02:28:30.599962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.609 [2024-11-04 02:28:30.600049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.609 [2024-11-04 02:28:30.600061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:43.609 [2024-11-04 02:28:30.600071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:43.609 [2024-11-04 02:28:30.600079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.609 [2024-11-04 02:28:30.600180] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:43.609 [2024-11-04 02:28:30.600192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:43.609 [2024-11-04 02:28:30.600200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:43.609 [2024-11-04 02:28:30.600211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.609 [2024-11-04 02:28:30.600219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:43.609 [2024-11-04 02:28:30.600228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:43.609 [2024-11-04 02:28:30.600235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:43.609 [2024-11-04 02:28:30.600247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:43.609 [2024-11-04 02:28:30.600255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:43.609 [2024-11-04 02:28:30.600264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:43.609 [2024-11-04 02:28:30.600270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:43.609 [2024-11-04 02:28:30.600279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:43.609 [2024-11-04 02:28:30.600285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:43.609 [2024-11-04 02:28:30.600294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:43.609 [2024-11-04 02:28:30.600301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:43.609 [2024-11-04 02:28:30.600310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.609 
[2024-11-04 02:28:30.600319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:43.609 [2024-11-04 02:28:30.600328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:43.609 [2024-11-04 02:28:30.600335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.609 [2024-11-04 02:28:30.600345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:43.609 [2024-11-04 02:28:30.600358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:43.609 [2024-11-04 02:28:30.600366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:43.609 [2024-11-04 02:28:30.600373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:43.609 [2024-11-04 02:28:30.600383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:43.609 [2024-11-04 02:28:30.600390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:43.609 [2024-11-04 02:28:30.600398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:43.609 [2024-11-04 02:28:30.600405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:43.609 [2024-11-04 02:28:30.600413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:43.609 [2024-11-04 02:28:30.600420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:43.609 [2024-11-04 02:28:30.600428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:43.609 [2024-11-04 02:28:30.600434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:43.609 [2024-11-04 02:28:30.600444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:43.609 [2024-11-04 02:28:30.600451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:43.609 [2024-11-04 02:28:30.600459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:43.609 [2024-11-04 02:28:30.600466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:43.609 [2024-11-04 02:28:30.600474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:43.609 [2024-11-04 02:28:30.600480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:43.609 [2024-11-04 02:28:30.600489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:43.609 [2024-11-04 02:28:30.600496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:43.609 [2024-11-04 02:28:30.600506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.609 [2024-11-04 02:28:30.600513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:43.609 [2024-11-04 02:28:30.600521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:43.609 [2024-11-04 02:28:30.600528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.609 [2024-11-04 02:28:30.600536] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:43.609 [2024-11-04 02:28:30.600543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:43.609 [2024-11-04 02:28:30.600553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:43.609 [2024-11-04 02:28:30.600562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.609 [2024-11-04 02:28:30.600572] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:43.609 [2024-11-04 02:28:30.600581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:43.609 [2024-11-04 02:28:30.600591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:43.609 [2024-11-04 02:28:30.600599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:43.609 [2024-11-04 02:28:30.600608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:43.609 [2024-11-04 02:28:30.600615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:43.609 [2024-11-04 02:28:30.600625] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:43.609 [2024-11-04 02:28:30.600634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:43.609 [2024-11-04 02:28:30.600648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:43.609 [2024-11-04 02:28:30.600656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:43.609 [2024-11-04 02:28:30.600665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:43.609 [2024-11-04 02:28:30.600672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:43.609 [2024-11-04 02:28:30.600682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:43.609 [2024-11-04 02:28:30.600688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:43.609 [2024-11-04 02:28:30.600697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:43.609 [2024-11-04 02:28:30.600704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:43.609 [2024-11-04 02:28:30.600713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:43.609 [2024-11-04 02:28:30.600720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:43.609 [2024-11-04 02:28:30.600729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:43.609 [2024-11-04 02:28:30.600736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:43.609 [2024-11-04 02:28:30.600745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:43.609 [2024-11-04 02:28:30.600752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:43.609 [2024-11-04 02:28:30.600761] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:43.609 [2024-11-04 
02:28:30.600769] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:43.609 [2024-11-04 02:28:30.600781] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:43.609 [2024-11-04 02:28:30.600788] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:43.609 [2024-11-04 02:28:30.600797] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:43.609 [2024-11-04 02:28:30.600804] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:43.609 [2024-11-04 02:28:30.600814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.609 [2024-11-04 02:28:30.600822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:43.609 [2024-11-04 02:28:30.600832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.700 ms 00:17:43.609 [2024-11-04 02:28:30.600840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.609 [2024-11-04 02:28:30.632999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.609 [2024-11-04 02:28:30.633049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:43.609 [2024-11-04 02:28:30.633064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.083 ms 00:17:43.609 [2024-11-04 02:28:30.633072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.609 [2024-11-04 02:28:30.633208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.609 [2024-11-04 02:28:30.633221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:43.609 [2024-11-04 02:28:30.633232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:43.609 [2024-11-04 02:28:30.633240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.609 [2024-11-04 02:28:30.668082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.610 [2024-11-04 02:28:30.668126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:43.610 [2024-11-04 02:28:30.668141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.817 ms 00:17:43.610 [2024-11-04 02:28:30.668152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.610 [2024-11-04 02:28:30.668239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.610 [2024-11-04 02:28:30.668249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:43.610 [2024-11-04 02:28:30.668260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:43.610 [2024-11-04 02:28:30.668268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.610 [2024-11-04 02:28:30.668770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.610 [2024-11-04 02:28:30.668800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:43.610 [2024-11-04 02:28:30.668814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.475 ms 00:17:43.610 [2024-11-04 02:28:30.668825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:43.610 [2024-11-04 02:28:30.669007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.610 [2024-11-04 02:28:30.669023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:43.610 [2024-11-04 02:28:30.669034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:17:43.610 [2024-11-04 02:28:30.669043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.610 [2024-11-04 02:28:30.687166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.610 [2024-11-04 02:28:30.687207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:43.610 [2024-11-04 02:28:30.687221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.097 ms 00:17:43.610 [2024-11-04 02:28:30.687229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.610 [2024-11-04 02:28:30.701483] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:43.610 [2024-11-04 02:28:30.701527] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:43.610 [2024-11-04 02:28:30.701543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.610 [2024-11-04 02:28:30.701552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:43.610 [2024-11-04 02:28:30.701564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.200 ms 00:17:43.610 [2024-11-04 02:28:30.701571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.871 [2024-11-04 02:28:30.727228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.871 [2024-11-04 02:28:30.727278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:43.871 [2024-11-04 02:28:30.727293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.562 ms 00:17:43.871 [2024-11-04 02:28:30.727302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.871 [2024-11-04 02:28:30.740282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.871 [2024-11-04 02:28:30.740325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:43.871 [2024-11-04 02:28:30.740343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.883 ms 00:17:43.871 [2024-11-04 02:28:30.740351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.871 [2024-11-04 02:28:30.752912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.871 [2024-11-04 02:28:30.753097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:43.871 [2024-11-04 02:28:30.753123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.470 ms 00:17:43.871 [2024-11-04 02:28:30.753131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.871 [2024-11-04 02:28:30.753861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.871 [2024-11-04 02:28:30.753921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:43.871 [2024-11-04 02:28:30.753936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:17:43.871 [2024-11-04 02:28:30.753945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.871 [2024-11-04 
02:28:30.833837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.871 [2024-11-04 02:28:30.833920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:43.871 [2024-11-04 02:28:30.833941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.860 ms 00:17:43.871 [2024-11-04 02:28:30.833951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.871 [2024-11-04 02:28:30.845073] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:43.871 [2024-11-04 02:28:30.864162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.871 [2024-11-04 02:28:30.864224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:43.871 [2024-11-04 02:28:30.864239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.107 ms 00:17:43.871 [2024-11-04 02:28:30.864251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.871 [2024-11-04 02:28:30.864340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.871 [2024-11-04 02:28:30.864353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:43.871 [2024-11-04 02:28:30.864363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:43.871 [2024-11-04 02:28:30.864373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.871 [2024-11-04 02:28:30.864430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.871 [2024-11-04 02:28:30.864442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:43.871 [2024-11-04 02:28:30.864449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:43.871 [2024-11-04 02:28:30.864459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.871 [2024-11-04 02:28:30.864488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.871 [2024-11-04 02:28:30.864499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:43.871 [2024-11-04 02:28:30.864508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:43.871 [2024-11-04 02:28:30.864520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.871 [2024-11-04 02:28:30.864557] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:43.871 [2024-11-04 02:28:30.864571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.871 [2024-11-04 02:28:30.864579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:43.871 [2024-11-04 02:28:30.864590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:43.871 [2024-11-04 02:28:30.864601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.871 [2024-11-04 02:28:30.890625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.871 [2024-11-04 02:28:30.890675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:43.871 [2024-11-04 02:28:30.890692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.996 ms 00:17:43.871 [2024-11-04 02:28:30.890701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.871 [2024-11-04 02:28:30.890834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.871 [2024-11-04 02:28:30.890846] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:43.872 [2024-11-04 02:28:30.890859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:17:43.872 [2024-11-04 02:28:30.890889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.872 [2024-11-04 02:28:30.892016] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:43.872 [2024-11-04 02:28:30.895281] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 325.488 ms, result 0 00:17:43.872 [2024-11-04 02:28:30.897329] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:43.872 Some configs were skipped because the RPC state that can call them passed over. 00:17:43.872 02:28:30 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:44.133 [2024-11-04 02:28:31.142204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.133 [2024-11-04 02:28:31.142408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:44.133 [2024-11-04 02:28:31.142477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.227 ms 00:17:44.133 [2024-11-04 02:28:31.142506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.133 [2024-11-04 02:28:31.142563] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.587 ms, result 0 00:17:44.133 true 00:17:44.133 02:28:31 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:44.395 [2024-11-04 02:28:31.366185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.395 [2024-11-04 02:28:31.366241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:44.395 [2024-11-04 02:28:31.366256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.916 ms 00:17:44.395 [2024-11-04 02:28:31.366265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.395 [2024-11-04 02:28:31.366307] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.043 ms, result 0 00:17:44.395 true 00:17:44.395 02:28:31 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 73938 00:17:44.395 02:28:31 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 73938 ']' 00:17:44.395 02:28:31 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 73938 00:17:44.395 02:28:31 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:17:44.395 02:28:31 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:44.395 02:28:31 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73938 00:17:44.395 killing process with pid 73938 00:17:44.395 02:28:31 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:44.395 02:28:31 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:44.395 02:28:31 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73938' 00:17:44.395 02:28:31 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 73938 00:17:44.395 02:28:31 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 73938 00:17:45.336
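Note on the two bdev_ftl_unmap calls above (trim.sh@78 and @79): they trim 1024 blocks at each end of the device's logical space. The layout dump reports 23592960 L2P entries, and 23592960 - 1024 = 23591936, so the second call unmaps exactly the last 1024 blocks. A minimal sketch of replaying the same RPCs by hand against a live SPDK target follows; the command name and flags are taken verbatim from this run, while the default RPC socket (/var/tmp/spdk.sock) and the repo path are assumptions, not confirmed by this log:

  # Sketch: replay the trim test's unmap calls against a running SPDK app.
  # Assumes rpc.py talks to the default socket /var/tmp/spdk.sock and that an
  # FTL bdev named ftl0 exists; paths mirror the test VM, not a general install.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024         # first 1024 blocks
  $RPC bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024  # last 1024 blocks (23592960 - 1024)

Each call runs as its own 'FTL trim' management process in the trace (3.587 ms and 3.043 ms here), and rpc.py prints true on success, as seen above.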
[2024-11-04 02:28:32.140161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.336 [2024-11-04 02:28:32.140211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:45.336 [2024-11-04 02:28:32.140222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:45.336 [2024-11-04 02:28:32.140229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.336 [2024-11-04 02:28:32.140246] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:45.336 [2024-11-04 02:28:32.142390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.336 [2024-11-04 02:28:32.142415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:45.336 [2024-11-04 02:28:32.142428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.130 ms 00:17:45.336 [2024-11-04 02:28:32.142434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.336 [2024-11-04 02:28:32.142649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.336 [2024-11-04 02:28:32.142658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:45.336 [2024-11-04 02:28:32.142665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:17:45.336 [2024-11-04 02:28:32.142672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.336 [2024-11-04 02:28:32.145894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.336 [2024-11-04 02:28:32.145918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:45.336 [2024-11-04 02:28:32.145926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.206 ms 00:17:45.336 [2024-11-04 02:28:32.145935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.336 [2024-11-04 02:28:32.151172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.336 [2024-11-04 02:28:32.151309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:45.336 [2024-11-04 02:28:32.151327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.210 ms 00:17:45.336 [2024-11-04 02:28:32.151333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.336 [2024-11-04 02:28:32.158838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.336 [2024-11-04 02:28:32.158948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:45.336 [2024-11-04 02:28:32.158964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.449 ms 00:17:45.336 [2024-11-04 02:28:32.158975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.336 [2024-11-04 02:28:32.165336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.336 [2024-11-04 02:28:32.165421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:45.336 [2024-11-04 02:28:32.165473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.332 ms 00:17:45.336 [2024-11-04 02:28:32.165494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.336 [2024-11-04 02:28:32.165603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.336 [2024-11-04 02:28:32.165677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:45.336 [2024-11-04 02:28:32.165727] mngt/ftl_mngt.c:
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:45.336 [2024-11-04 02:28:32.165741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.336 [2024-11-04 02:28:32.173822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.336 [2024-11-04 02:28:32.173920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:45.336 [2024-11-04 02:28:32.173969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.056 ms 00:17:45.336 [2024-11-04 02:28:32.173987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.336 [2024-11-04 02:28:32.180976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.336 [2024-11-04 02:28:32.181061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:45.336 [2024-11-04 02:28:32.181108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.954 ms 00:17:45.336 [2024-11-04 02:28:32.181124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.336 [2024-11-04 02:28:32.188171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.336 [2024-11-04 02:28:32.188248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:45.336 [2024-11-04 02:28:32.188297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.003 ms 00:17:45.336 [2024-11-04 02:28:32.188315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.336 [2024-11-04 02:28:32.195274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.336 [2024-11-04 02:28:32.195352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:45.336 [2024-11-04 02:28:32.195391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.906 ms 00:17:45.336 [2024-11-04 02:28:32.195408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.336 [2024-11-04 02:28:32.195448] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:45.337 [2024-11-04 02:28:32.195497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.195526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.195572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.195598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.195621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.195646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.195668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.195692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.195720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.195744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.195829] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.195854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.195891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.195915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.195937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.196011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.196038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.196063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.196085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.196108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.196129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.196239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.196264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.196287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.196309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.196365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.196388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.196411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.196433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.196479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.196502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.196525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.196547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.196570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.196630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 
[2024-11-04 02:28:32.196701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.196723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.196748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.196770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.196793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.196930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.196955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.196977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.197002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.197024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.197073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.197097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.197120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.197142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.197165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.197186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.197241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.197263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.197288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.197310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.197332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.197408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.197431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.197453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.197476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:17:45.337 [2024-11-04 02:28:32.197522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.197547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.197592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.197618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.197640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.197680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.197703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.197751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.197773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.197817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.197841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.197872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.197896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.197949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.197972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.197995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.198016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.198085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.198148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.198173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.198195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.198242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.198266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.198289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.198310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.198359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.198382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.198430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:45.337 [2024-11-04 02:28:32.198452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:45.338 [2024-11-04 02:28:32.198493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:45.338 [2024-11-04 02:28:32.198517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:45.338 [2024-11-04 02:28:32.198540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:45.338 [2024-11-04 02:28:32.198595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:45.338 [2024-11-04 02:28:32.198620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:45.338 [2024-11-04 02:28:32.198642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:45.338 [2024-11-04 02:28:32.198686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:45.338 [2024-11-04 02:28:32.198710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:45.338 [2024-11-04 02:28:32.198816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:45.338 [2024-11-04 02:28:32.198838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:45.338 [2024-11-04 02:28:32.198861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:45.338 [2024-11-04 02:28:32.198899] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:45.338 [2024-11-04 02:28:32.198917] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c6f3f3b2-f208-412f-922b-b47e9684c98b 00:17:45.338 [2024-11-04 02:28:32.198944] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:45.338 [2024-11-04 02:28:32.199006] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:45.338 [2024-11-04 02:28:32.199060] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:45.338 [2024-11-04 02:28:32.199133] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:45.338 [2024-11-04 02:28:32.199151] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:45.338 [2024-11-04 02:28:32.199167] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:45.338 [2024-11-04 02:28:32.199181] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:45.338 [2024-11-04 02:28:32.199196] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:45.338 [2024-11-04 02:28:32.199209] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:45.338 [2024-11-04 02:28:32.199225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:45.338 [2024-11-04 02:28:32.199240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:45.338 [2024-11-04 02:28:32.199256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.778 ms 00:17:45.338 [2024-11-04 02:28:32.199307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.338 [2024-11-04 02:28:32.208946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.338 [2024-11-04 02:28:32.209026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:45.338 [2024-11-04 02:28:32.209069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.607 ms 00:17:45.338 [2024-11-04 02:28:32.209086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.338 [2024-11-04 02:28:32.209407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.338 [2024-11-04 02:28:32.209468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:45.338 [2024-11-04 02:28:32.209512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:17:45.338 [2024-11-04 02:28:32.209529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.338 [2024-11-04 02:28:32.244129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.338 [2024-11-04 02:28:32.244216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:45.338 [2024-11-04 02:28:32.244256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.338 [2024-11-04 02:28:32.244272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.338 [2024-11-04 02:28:32.244355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.338 [2024-11-04 02:28:32.244374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:45.338 [2024-11-04 02:28:32.244390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.338 [2024-11-04 02:28:32.244404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.338 [2024-11-04 02:28:32.244451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.338 [2024-11-04 02:28:32.244468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:45.338 [2024-11-04 02:28:32.244486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.338 [2024-11-04 02:28:32.244539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.338 [2024-11-04 02:28:32.244565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.338 [2024-11-04 02:28:32.244607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:45.338 [2024-11-04 02:28:32.244625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.338 [2024-11-04 02:28:32.244667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.338 [2024-11-04 02:28:32.302871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.338 [2024-11-04 02:28:32.302991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:45.338 [2024-11-04 02:28:32.303007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.338 [2024-11-04 02:28:32.303013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.338 [2024-11-04 
02:28:32.350554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.338 [2024-11-04 02:28:32.350615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:45.338 [2024-11-04 02:28:32.350626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.338 [2024-11-04 02:28:32.350632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.338 [2024-11-04 02:28:32.350691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.338 [2024-11-04 02:28:32.350701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:45.338 [2024-11-04 02:28:32.350709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.338 [2024-11-04 02:28:32.350715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.338 [2024-11-04 02:28:32.350739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.338 [2024-11-04 02:28:32.350745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:45.338 [2024-11-04 02:28:32.350752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.338 [2024-11-04 02:28:32.350757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.338 [2024-11-04 02:28:32.350826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.338 [2024-11-04 02:28:32.350833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:45.338 [2024-11-04 02:28:32.350842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.338 [2024-11-04 02:28:32.350849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.338 [2024-11-04 02:28:32.350885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.338 [2024-11-04 02:28:32.350892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:45.338 [2024-11-04 02:28:32.350900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.338 [2024-11-04 02:28:32.350905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.338 [2024-11-04 02:28:32.350934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.338 [2024-11-04 02:28:32.350941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:45.338 [2024-11-04 02:28:32.350951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.338 [2024-11-04 02:28:32.350956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.338 [2024-11-04 02:28:32.350990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.338 [2024-11-04 02:28:32.350996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:45.338 [2024-11-04 02:28:32.351004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.338 [2024-11-04 02:28:32.351009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.338 [2024-11-04 02:28:32.351114] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 210.935 ms, result 0 00:17:45.907 02:28:32 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:45.907 02:28:32 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:45.907
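The spdk_dd step above (trim.sh@85) re-creates the bdev stack from ftl.json inside a short-lived SPDK app (the EAL parameters line below shows it running as spdk_pid73991) and copies 65536 blocks from the ftl0 bdev into a plain file, which is why a fresh FTL startup trace follows. A sketch of the same read-back with the options spelled out; the command and flags appear verbatim in this run, but the flag glosses in the comments are a reading of spdk_dd's dd-style interface, not confirmed by this log:

  # Sketch: dump 65536 blocks of the FTL bdev into a regular file.
  # --json re-creates the bdev configuration (including ftl0) in the one-shot app;
  # --ib names the input bdev, --of the output file, --count the blocks to copy.
  SPDK=/home/vagrant/spdk_repo/spdk
  $SPDK/build/bin/spdk_dd --ib=ftl0 \
      --of=$SPDK/test/ftl/data \
      --count=65536 \
      --json=$SPDK/test/ftl/config/ftl.json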
[2024-11-04 02:28:32.924934] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:17:45.907 [2024-11-04 02:28:32.925059] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73991 ] 00:17:46.298 [2024-11-04 02:28:33.081695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.298 [2024-11-04 02:28:33.166239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.298 [2024-11-04 02:28:33.371809] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:46.298 [2024-11-04 02:28:33.371858] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:46.557 [2024-11-04 02:28:33.523814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.557 [2024-11-04 02:28:33.523851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:46.557 [2024-11-04 02:28:33.523862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:46.557 [2024-11-04 02:28:33.523881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.557 [2024-11-04 02:28:33.525917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.557 [2024-11-04 02:28:33.525945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:46.557 [2024-11-04 02:28:33.525952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.024 ms 00:17:46.557 [2024-11-04 02:28:33.525958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.557 [2024-11-04 02:28:33.526013] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:46.557 [2024-11-04 02:28:33.526608] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:46.557 [2024-11-04 02:28:33.526625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.557 [2024-11-04 02:28:33.526631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:46.557 [2024-11-04 02:28:33.526638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:17:46.557 [2024-11-04 02:28:33.526644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.557 [2024-11-04 02:28:33.527603] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:46.557 [2024-11-04 02:28:33.537192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.557 [2024-11-04 02:28:33.537218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:46.557 [2024-11-04 02:28:33.537230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.590 ms 00:17:46.557 [2024-11-04 02:28:33.537236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.557 [2024-11-04 02:28:33.537302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.557 [2024-11-04 02:28:33.537311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:46.557 [2024-11-04 02:28:33.537317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 0.014 ms 00:17:46.557 [2024-11-04 02:28:33.537323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.557 [2024-11-04 02:28:33.541572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.557 [2024-11-04 02:28:33.541598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:46.557 [2024-11-04 02:28:33.541605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.221 ms 00:17:46.557 [2024-11-04 02:28:33.541611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.557 [2024-11-04 02:28:33.541679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.557 [2024-11-04 02:28:33.541686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:46.557 [2024-11-04 02:28:33.541692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:46.557 [2024-11-04 02:28:33.541698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.557 [2024-11-04 02:28:33.541714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.557 [2024-11-04 02:28:33.541719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:46.557 [2024-11-04 02:28:33.541727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:46.557 [2024-11-04 02:28:33.541732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.557 [2024-11-04 02:28:33.541750] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:46.557 [2024-11-04 02:28:33.544365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.557 [2024-11-04 02:28:33.544481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:46.557 [2024-11-04 02:28:33.544494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.618 ms 00:17:46.557 [2024-11-04 02:28:33.544500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.557 [2024-11-04 02:28:33.544529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.557 [2024-11-04 02:28:33.544535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:46.557 [2024-11-04 02:28:33.544542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:46.557 [2024-11-04 02:28:33.544548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.557 [2024-11-04 02:28:33.544561] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:46.557 [2024-11-04 02:28:33.544575] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:46.557 [2024-11-04 02:28:33.544604] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:46.557 [2024-11-04 02:28:33.544615] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:46.557 [2024-11-04 02:28:33.544693] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:46.557 [2024-11-04 02:28:33.544702] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:46.557 [2024-11-04 02:28:33.544710] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:46.557 [2024-11-04 02:28:33.544718] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:46.557 [2024-11-04 02:28:33.544725] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:46.557 [2024-11-04 02:28:33.544733] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:46.557 [2024-11-04 02:28:33.544739] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:46.557 [2024-11-04 02:28:33.544745] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:46.558 [2024-11-04 02:28:33.544754] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:46.558 [2024-11-04 02:28:33.544760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.558 [2024-11-04 02:28:33.544765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:46.558 [2024-11-04 02:28:33.544771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:17:46.558 [2024-11-04 02:28:33.544777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.558 [2024-11-04 02:28:33.544843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.558 [2024-11-04 02:28:33.544849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:46.558 [2024-11-04 02:28:33.544855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:46.558 [2024-11-04 02:28:33.544863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.558 [2024-11-04 02:28:33.544950] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:46.558 [2024-11-04 02:28:33.544957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:46.558 [2024-11-04 02:28:33.544964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:46.558 [2024-11-04 02:28:33.544970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.558 [2024-11-04 02:28:33.544976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:46.558 [2024-11-04 02:28:33.544981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:46.558 [2024-11-04 02:28:33.544987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:46.558 [2024-11-04 02:28:33.544992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:46.558 [2024-11-04 02:28:33.544998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:46.558 [2024-11-04 02:28:33.545003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:46.558 [2024-11-04 02:28:33.545008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:46.558 [2024-11-04 02:28:33.545013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:46.558 [2024-11-04 02:28:33.545018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:46.558 [2024-11-04 02:28:33.545027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:46.558 [2024-11-04 02:28:33.545033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:46.558 [2024-11-04 02:28:33.545037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.558 [2024-11-04 02:28:33.545042] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:46.558 [2024-11-04 02:28:33.545047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:46.558 [2024-11-04 02:28:33.545052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.558 [2024-11-04 02:28:33.545057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:46.558 [2024-11-04 02:28:33.545063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:46.558 [2024-11-04 02:28:33.545068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:46.558 [2024-11-04 02:28:33.545073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:46.558 [2024-11-04 02:28:33.545078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:46.558 [2024-11-04 02:28:33.545083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:46.558 [2024-11-04 02:28:33.545087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:46.558 [2024-11-04 02:28:33.545092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:46.558 [2024-11-04 02:28:33.545097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:46.558 [2024-11-04 02:28:33.545102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:46.558 [2024-11-04 02:28:33.545107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:46.558 [2024-11-04 02:28:33.545112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:46.558 [2024-11-04 02:28:33.545117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:46.558 [2024-11-04 02:28:33.545122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:46.558 [2024-11-04 02:28:33.545126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:46.558 [2024-11-04 02:28:33.545131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:46.558 [2024-11-04 02:28:33.545137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:46.558 [2024-11-04 02:28:33.545141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:46.558 [2024-11-04 02:28:33.545146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:46.558 [2024-11-04 02:28:33.545151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:46.558 [2024-11-04 02:28:33.545156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.558 [2024-11-04 02:28:33.545161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:46.558 [2024-11-04 02:28:33.545166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:46.558 [2024-11-04 02:28:33.545171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.558 [2024-11-04 02:28:33.545176] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:46.558 [2024-11-04 02:28:33.545182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:46.558 [2024-11-04 02:28:33.545188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:46.558 [2024-11-04 02:28:33.545193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.558 [2024-11-04 02:28:33.545201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:46.558 
[2024-11-04 02:28:33.545206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:46.558 [2024-11-04 02:28:33.545211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:46.558 [2024-11-04 02:28:33.545216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:46.558 [2024-11-04 02:28:33.545221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:46.558 [2024-11-04 02:28:33.545226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:46.558 [2024-11-04 02:28:33.545232] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:46.558 [2024-11-04 02:28:33.545239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:46.558 [2024-11-04 02:28:33.545245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:46.558 [2024-11-04 02:28:33.545251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:46.558 [2024-11-04 02:28:33.545256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:46.558 [2024-11-04 02:28:33.545261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:46.558 [2024-11-04 02:28:33.545267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:46.558 [2024-11-04 02:28:33.545272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:46.558 [2024-11-04 02:28:33.545277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:46.558 [2024-11-04 02:28:33.545282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:46.558 [2024-11-04 02:28:33.545287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:46.558 [2024-11-04 02:28:33.545292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:46.558 [2024-11-04 02:28:33.545297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:46.558 [2024-11-04 02:28:33.545303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:46.558 [2024-11-04 02:28:33.545308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:46.558 [2024-11-04 02:28:33.545314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:46.558 [2024-11-04 02:28:33.545319] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:46.558 [2024-11-04 02:28:33.545325] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:46.558 [2024-11-04 02:28:33.545331] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:46.558 [2024-11-04 02:28:33.545337] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:46.558 [2024-11-04 02:28:33.545343] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:46.558 [2024-11-04 02:28:33.545349] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:46.558 [2024-11-04 02:28:33.545354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.558 [2024-11-04 02:28:33.545360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:46.558 [2024-11-04 02:28:33.545367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.456 ms 00:17:46.558 [2024-11-04 02:28:33.545374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.558 [2024-11-04 02:28:33.565841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.559 [2024-11-04 02:28:33.565878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:46.559 [2024-11-04 02:28:33.565886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.430 ms 00:17:46.559 [2024-11-04 02:28:33.565892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.559 [2024-11-04 02:28:33.565983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.559 [2024-11-04 02:28:33.565991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:46.559 [2024-11-04 02:28:33.566000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:46.559 [2024-11-04 02:28:33.566006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.559 [2024-11-04 02:28:33.612250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.559 [2024-11-04 02:28:33.612280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:46.559 [2024-11-04 02:28:33.612289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.227 ms 00:17:46.559 [2024-11-04 02:28:33.612296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.559 [2024-11-04 02:28:33.612352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.559 [2024-11-04 02:28:33.612360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:46.559 [2024-11-04 02:28:33.612367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:46.559 [2024-11-04 02:28:33.612372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.559 [2024-11-04 02:28:33.612657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.559 [2024-11-04 02:28:33.612680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:46.559 [2024-11-04 02:28:33.612688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:17:46.559 [2024-11-04 02:28:33.612694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.559 [2024-11-04 
02:28:33.612798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.559 [2024-11-04 02:28:33.612813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:46.559 [2024-11-04 02:28:33.612819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:46.559 [2024-11-04 02:28:33.612825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.559 [2024-11-04 02:28:33.623442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.559 [2024-11-04 02:28:33.623466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:46.559 [2024-11-04 02:28:33.623474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.601 ms 00:17:46.559 [2024-11-04 02:28:33.623480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.559 [2024-11-04 02:28:33.633212] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:46.559 [2024-11-04 02:28:33.633238] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:46.559 [2024-11-04 02:28:33.633248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.559 [2024-11-04 02:28:33.633254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:46.559 [2024-11-04 02:28:33.633261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.678 ms 00:17:46.559 [2024-11-04 02:28:33.633267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.559 [2024-11-04 02:28:33.651419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.559 [2024-11-04 02:28:33.651451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:46.559 [2024-11-04 02:28:33.651460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.107 ms 00:17:46.559 [2024-11-04 02:28:33.651466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.559 [2024-11-04 02:28:33.660026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.559 [2024-11-04 02:28:33.660050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:46.559 [2024-11-04 02:28:33.660057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.507 ms 00:17:46.559 [2024-11-04 02:28:33.660063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.819 [2024-11-04 02:28:33.668420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.819 [2024-11-04 02:28:33.668443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:46.819 [2024-11-04 02:28:33.668450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.316 ms 00:17:46.819 [2024-11-04 02:28:33.668456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.819 [2024-11-04 02:28:33.668912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.819 [2024-11-04 02:28:33.668931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:46.819 [2024-11-04 02:28:33.668938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:17:46.819 [2024-11-04 02:28:33.668944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.819 [2024-11-04 02:28:33.711916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:46.819 [2024-11-04 02:28:33.711954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:46.819 [2024-11-04 02:28:33.711965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.954 ms 00:17:46.819 [2024-11-04 02:28:33.711972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.819 [2024-11-04 02:28:33.719593] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:46.819 [2024-11-04 02:28:33.730669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.819 [2024-11-04 02:28:33.730789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:46.819 [2024-11-04 02:28:33.730803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.638 ms 00:17:46.819 [2024-11-04 02:28:33.730809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.819 [2024-11-04 02:28:33.730895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.819 [2024-11-04 02:28:33.730905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:46.819 [2024-11-04 02:28:33.730912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:46.819 [2024-11-04 02:28:33.730918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.819 [2024-11-04 02:28:33.730956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.819 [2024-11-04 02:28:33.730963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:46.819 [2024-11-04 02:28:33.730970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:46.819 [2024-11-04 02:28:33.730976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.819 [2024-11-04 02:28:33.730994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.819 [2024-11-04 02:28:33.731001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:46.819 [2024-11-04 02:28:33.731008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:46.819 [2024-11-04 02:28:33.731014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.819 [2024-11-04 02:28:33.731039] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:46.819 [2024-11-04 02:28:33.731047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.819 [2024-11-04 02:28:33.731053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:46.819 [2024-11-04 02:28:33.731059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:46.819 [2024-11-04 02:28:33.731065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.819 [2024-11-04 02:28:33.748660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.819 [2024-11-04 02:28:33.748689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:46.819 [2024-11-04 02:28:33.748698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.581 ms 00:17:46.819 [2024-11-04 02:28:33.748704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.819 [2024-11-04 02:28:33.748773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.819 [2024-11-04 02:28:33.748781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:17:46.819 [2024-11-04 02:28:33.748788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:46.819 [2024-11-04 02:28:33.748794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.819 [2024-11-04 02:28:33.749420] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:46.819 [2024-11-04 02:28:33.751756] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 225.389 ms, result 0 00:17:46.819 [2024-11-04 02:28:33.752354] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:46.819 [2024-11-04 02:28:33.767173] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:47.763 [2024-11-04T02:28:35.819Z] Copying: 21/256 [MB] (21 MBps)
[2024-11-04T02:28:37.209Z] Copying: 40/256 [MB] (18 MBps)
[2024-11-04T02:28:37.782Z] Copying: 61/256 [MB] (21 MBps)
[2024-11-04T02:28:39.168Z] Copying: 80/256 [MB] (19 MBps)
[2024-11-04T02:28:40.110Z] Copying: 102/256 [MB] (21 MBps)
[2024-11-04T02:28:41.056Z] Copying: 124/256 [MB] (22 MBps)
[2024-11-04T02:28:42.001Z] Copying: 147/256 [MB] (22 MBps)
[2024-11-04T02:28:42.946Z] Copying: 165/256 [MB] (18 MBps)
[2024-11-04T02:28:43.892Z] Copying: 182/256 [MB] (16 MBps)
[2024-11-04T02:28:44.834Z] Copying: 204/256 [MB] (21 MBps)
[2024-11-04T02:28:45.778Z] Copying: 223/256 [MB] (19 MBps)
[2024-11-04T02:28:47.164Z] Copying: 244/256 [MB] (21 MBps)
[2024-11-04T02:28:47.164Z] Copying: 255/256 [MB] (10 MBps)
[2024-11-04T02:28:47.164Z] Copying: 256/256 [MB] (average 19 MBps)
[2024-11-04 02:28:46.831669] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:00.053 [2024-11-04 02:28:46.841947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.053 [2024-11-04 02:28:46.842001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:00.053 [2024-11-04 02:28:46.842018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:00.053 [2024-11-04 02:28:46.842027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.053 [2024-11-04 02:28:46.842053] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:00.053 [2024-11-04 02:28:46.845107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.053 [2024-11-04 02:28:46.845159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:00.053 [2024-11-04 02:28:46.845171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.038 ms 00:18:00.053 [2024-11-04 02:28:46.845180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.053 [2024-11-04 02:28:46.845445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.053 [2024-11-04 02:28:46.845456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:00.053 [2024-11-04 02:28:46.845466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:18:00.053 [2024-11-04 02:28:46.845474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.053 [2024-11-04 02:28:46.849193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.053 [2024-11-04 02:28:46.849217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name:
Persist L2P 00:18:00.053 [2024-11-04 02:28:46.849231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.704 ms 00:18:00.053 [2024-11-04 02:28:46.849240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.053 [2024-11-04 02:28:46.856199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.053 [2024-11-04 02:28:46.856237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:00.053 [2024-11-04 02:28:46.856247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.940 ms 00:18:00.053 [2024-11-04 02:28:46.856256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.053 [2024-11-04 02:28:46.881649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.053 [2024-11-04 02:28:46.881697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:00.053 [2024-11-04 02:28:46.881709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.323 ms 00:18:00.053 [2024-11-04 02:28:46.881717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.053 [2024-11-04 02:28:46.905774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.053 [2024-11-04 02:28:46.905848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:00.053 [2024-11-04 02:28:46.905898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.001 ms 00:18:00.053 [2024-11-04 02:28:46.905909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.053 [2024-11-04 02:28:46.906116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.053 [2024-11-04 02:28:46.906131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:00.053 [2024-11-04 02:28:46.906143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:18:00.053 [2024-11-04 02:28:46.906152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.053 [2024-11-04 02:28:46.932683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.053 [2024-11-04 02:28:46.932735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:00.053 [2024-11-04 02:28:46.932748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.497 ms 00:18:00.053 [2024-11-04 02:28:46.932756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.053 [2024-11-04 02:28:46.958188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.053 [2024-11-04 02:28:46.958237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:00.053 [2024-11-04 02:28:46.958250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.360 ms 00:18:00.053 [2024-11-04 02:28:46.958258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.053 [2024-11-04 02:28:46.982649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.053 [2024-11-04 02:28:46.982692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:00.053 [2024-11-04 02:28:46.982705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.341 ms 00:18:00.053 [2024-11-04 02:28:46.982713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.053 [2024-11-04 02:28:47.007212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.053 [2024-11-04 02:28:47.007259] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:00.053 [2024-11-04 02:28:47.007270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.412 ms 00:18:00.053 [2024-11-04 02:28:47.007278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.053 [2024-11-04 02:28:47.007324] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:00.053 [2024-11-04 02:28:47.007349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:00.053 [2024-11-04 02:28:47.007361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:00.053 [2024-11-04 02:28:47.007370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:00.053 [2024-11-04 02:28:47.007380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:00.053 [2024-11-04 02:28:47.007388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:00.053 [2024-11-04 02:28:47.007397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:00.053 [2024-11-04 02:28:47.007405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:00.053 [2024-11-04 02:28:47.007414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:00.053 [2024-11-04 02:28:47.007423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:18:00.054 [2024-11-04 02:28:47.007540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.007992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.008000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.008010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.008019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.008029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.008038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.008047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.008056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.008064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.008072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.008081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.008089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.008097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.008105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.008113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.008122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.008130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.008137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.008146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.008155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.008164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.008171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.008179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.008189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.008198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.008215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.008234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.008241] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:00.054 [2024-11-04 02:28:47.008250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:00.055 [2024-11-04 02:28:47.008259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:00.055 [2024-11-04 02:28:47.008268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:00.055 [2024-11-04 02:28:47.008286] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:00.055 [2024-11-04 02:28:47.008297] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c6f3f3b2-f208-412f-922b-b47e9684c98b 00:18:00.055 [2024-11-04 02:28:47.008307] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:00.055 [2024-11-04 02:28:47.008316] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:00.055 [2024-11-04 02:28:47.008323] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:00.055 [2024-11-04 02:28:47.008332] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:00.055 [2024-11-04 02:28:47.008340] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:00.055 [2024-11-04 02:28:47.008349] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:00.055 [2024-11-04 02:28:47.008357] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:00.055 [2024-11-04 02:28:47.008364] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:00.055 [2024-11-04 02:28:47.008371] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:00.055 [2024-11-04 02:28:47.008379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.055 [2024-11-04 02:28:47.008388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:00.055 [2024-11-04 02:28:47.008397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.058 ms 00:18:00.055 [2024-11-04 02:28:47.008407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.055 [2024-11-04 02:28:47.023185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.055 [2024-11-04 02:28:47.023227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:00.055 [2024-11-04 02:28:47.023240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.741 ms 00:18:00.055 [2024-11-04 02:28:47.023249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.055 [2024-11-04 02:28:47.023686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.055 [2024-11-04 02:28:47.023727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:00.055 [2024-11-04 02:28:47.023739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:18:00.055 [2024-11-04 02:28:47.023756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.055 [2024-11-04 02:28:47.065618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.055 [2024-11-04 02:28:47.065650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:00.055 [2024-11-04 02:28:47.065660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.055 [2024-11-04 02:28:47.065667] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:00.055 [2024-11-04 02:28:47.065764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.055 [2024-11-04 02:28:47.065776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:00.055 [2024-11-04 02:28:47.065784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.055 [2024-11-04 02:28:47.065792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.055 [2024-11-04 02:28:47.065832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.055 [2024-11-04 02:28:47.065842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:00.055 [2024-11-04 02:28:47.065849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.055 [2024-11-04 02:28:47.065857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.055 [2024-11-04 02:28:47.065890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.055 [2024-11-04 02:28:47.065898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:00.055 [2024-11-04 02:28:47.065909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.055 [2024-11-04 02:28:47.065917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.055 [2024-11-04 02:28:47.145663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.055 [2024-11-04 02:28:47.145851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:00.055 [2024-11-04 02:28:47.145889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.055 [2024-11-04 02:28:47.145898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.316 [2024-11-04 02:28:47.211824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.316 [2024-11-04 02:28:47.211878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:00.316 [2024-11-04 02:28:47.211893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.316 [2024-11-04 02:28:47.211901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.316 [2024-11-04 02:28:47.211972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.316 [2024-11-04 02:28:47.211982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:00.316 [2024-11-04 02:28:47.211991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.316 [2024-11-04 02:28:47.211999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.316 [2024-11-04 02:28:47.212030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.316 [2024-11-04 02:28:47.212039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:00.316 [2024-11-04 02:28:47.212048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.316 [2024-11-04 02:28:47.212073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.316 [2024-11-04 02:28:47.212169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.316 [2024-11-04 02:28:47.212181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:00.316 [2024-11-04 02:28:47.212190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:18:00.316 [2024-11-04 02:28:47.212198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.316 [2024-11-04 02:28:47.212234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.316 [2024-11-04 02:28:47.212243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:00.316 [2024-11-04 02:28:47.212251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.316 [2024-11-04 02:28:47.212259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.316 [2024-11-04 02:28:47.212302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.316 [2024-11-04 02:28:47.212311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:00.316 [2024-11-04 02:28:47.212321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.316 [2024-11-04 02:28:47.212329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.316 [2024-11-04 02:28:47.212377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.316 [2024-11-04 02:28:47.212388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:00.316 [2024-11-04 02:28:47.212397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.316 [2024-11-04 02:28:47.212405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.316 [2024-11-04 02:28:47.212558] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 370.624 ms, result 0 00:18:00.888 00:18:00.888 00:18:00.888 02:28:47 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:18:00.888 02:28:47 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:01.831 02:28:48 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:01.831 [2024-11-04 02:28:48.648941] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:18:01.831 [2024-11-04 02:28:48.649074] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74157 ] 00:18:01.831 [2024-11-04 02:28:48.812123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.092 [2024-11-04 02:28:48.957170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.353 [2024-11-04 02:28:49.286657] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:02.353 [2024-11-04 02:28:49.286955] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:02.353 [2024-11-04 02:28:49.451666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.353 [2024-11-04 02:28:49.451755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:02.353 [2024-11-04 02:28:49.451774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:02.353 [2024-11-04 02:28:49.451784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.353 [2024-11-04 02:28:49.454938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.353 [2024-11-04 02:28:49.454986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:02.353 [2024-11-04 02:28:49.454997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.131 ms 00:18:02.353 [2024-11-04 02:28:49.455006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.353 [2024-11-04 02:28:49.455135] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:02.353 [2024-11-04 02:28:49.456083] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:02.353 [2024-11-04 02:28:49.456135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.353 [2024-11-04 02:28:49.456145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:02.353 [2024-11-04 02:28:49.456156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.010 ms 00:18:02.353 [2024-11-04 02:28:49.456164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.353 [2024-11-04 02:28:49.458483] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:02.615 [2024-11-04 02:28:49.473674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.615 [2024-11-04 02:28:49.473858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:02.615 [2024-11-04 02:28:49.474175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.193 ms 00:18:02.615 [2024-11-04 02:28:49.474197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.615 [2024-11-04 02:28:49.474315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.615 [2024-11-04 02:28:49.474330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:02.615 [2024-11-04 02:28:49.474340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:02.615 [2024-11-04 02:28:49.474348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.615 [2024-11-04 02:28:49.485737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:02.615 [2024-11-04 02:28:49.485943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:02.615 [2024-11-04 02:28:49.485971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.338 ms 00:18:02.615 [2024-11-04 02:28:49.485980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.615 [2024-11-04 02:28:49.486114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.615 [2024-11-04 02:28:49.486127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:02.615 [2024-11-04 02:28:49.486137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:02.615 [2024-11-04 02:28:49.486148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.615 [2024-11-04 02:28:49.486177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.615 [2024-11-04 02:28:49.486186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:02.615 [2024-11-04 02:28:49.486201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:02.615 [2024-11-04 02:28:49.486211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.615 [2024-11-04 02:28:49.486236] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:02.615 [2024-11-04 02:28:49.490985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.615 [2024-11-04 02:28:49.491026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:02.615 [2024-11-04 02:28:49.491037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.758 ms 00:18:02.615 [2024-11-04 02:28:49.491047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.615 [2024-11-04 02:28:49.491106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.615 [2024-11-04 02:28:49.491116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:02.615 [2024-11-04 02:28:49.491125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:02.615 [2024-11-04 02:28:49.491134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.615 [2024-11-04 02:28:49.491155] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:02.615 [2024-11-04 02:28:49.491181] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:02.615 [2024-11-04 02:28:49.491227] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:02.615 [2024-11-04 02:28:49.491244] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:02.615 [2024-11-04 02:28:49.491357] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:02.615 [2024-11-04 02:28:49.491371] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:02.615 [2024-11-04 02:28:49.491382] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:02.615 [2024-11-04 02:28:49.491394] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:02.615 [2024-11-04 02:28:49.491405] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:02.615 [2024-11-04 02:28:49.491418] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:02.615 [2024-11-04 02:28:49.491426] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:02.615 [2024-11-04 02:28:49.491438] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:02.615 [2024-11-04 02:28:49.491447] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:02.615 [2024-11-04 02:28:49.491456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.615 [2024-11-04 02:28:49.491465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:02.615 [2024-11-04 02:28:49.491475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:18:02.615 [2024-11-04 02:28:49.491486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.615 [2024-11-04 02:28:49.491575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.615 [2024-11-04 02:28:49.491585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:02.615 [2024-11-04 02:28:49.491595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:02.615 [2024-11-04 02:28:49.491607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.615 [2024-11-04 02:28:49.491707] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:02.615 [2024-11-04 02:28:49.491733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:02.615 [2024-11-04 02:28:49.491743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:02.615 [2024-11-04 02:28:49.491752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:02.615 [2024-11-04 02:28:49.491760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:02.615 [2024-11-04 02:28:49.491768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:02.615 [2024-11-04 02:28:49.491775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:02.615 [2024-11-04 02:28:49.491785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:02.615 [2024-11-04 02:28:49.491794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:02.615 [2024-11-04 02:28:49.491802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:02.615 [2024-11-04 02:28:49.491810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:02.615 [2024-11-04 02:28:49.491817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:02.615 [2024-11-04 02:28:49.491824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:02.615 [2024-11-04 02:28:49.491841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:02.615 [2024-11-04 02:28:49.491850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:02.615 [2024-11-04 02:28:49.491860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:02.615 [2024-11-04 02:28:49.491896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:02.615 [2024-11-04 02:28:49.491905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:02.615 [2024-11-04 02:28:49.491912] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:02.615 [2024-11-04 02:28:49.491920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:02.615 [2024-11-04 02:28:49.491930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:02.615 [2024-11-04 02:28:49.491938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:02.615 [2024-11-04 02:28:49.491945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:02.615 [2024-11-04 02:28:49.491952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:02.615 [2024-11-04 02:28:49.491959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:02.615 [2024-11-04 02:28:49.491966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:02.615 [2024-11-04 02:28:49.491973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:02.615 [2024-11-04 02:28:49.491980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:02.615 [2024-11-04 02:28:49.491990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:02.615 [2024-11-04 02:28:49.491998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:02.615 [2024-11-04 02:28:49.492004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:02.615 [2024-11-04 02:28:49.492011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:02.615 [2024-11-04 02:28:49.492019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:02.615 [2024-11-04 02:28:49.492026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:02.615 [2024-11-04 02:28:49.492034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:02.616 [2024-11-04 02:28:49.492041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:02.616 [2024-11-04 02:28:49.492050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:02.616 [2024-11-04 02:28:49.492059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:02.616 [2024-11-04 02:28:49.492066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:02.616 [2024-11-04 02:28:49.492074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:02.616 [2024-11-04 02:28:49.492081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:02.616 [2024-11-04 02:28:49.492090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:02.616 [2024-11-04 02:28:49.492099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:02.616 [2024-11-04 02:28:49.492106] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:02.616 [2024-11-04 02:28:49.492116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:02.616 [2024-11-04 02:28:49.492124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:02.616 [2024-11-04 02:28:49.492133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:02.616 [2024-11-04 02:28:49.492145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:02.616 [2024-11-04 02:28:49.492152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:02.616 [2024-11-04 02:28:49.492161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:02.616 
[2024-11-04 02:28:49.492168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:02.616 [2024-11-04 02:28:49.492175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:02.616 [2024-11-04 02:28:49.492182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:02.616 [2024-11-04 02:28:49.492190] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:02.616 [2024-11-04 02:28:49.492200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:02.616 [2024-11-04 02:28:49.492209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:02.616 [2024-11-04 02:28:49.492216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:02.616 [2024-11-04 02:28:49.492224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:02.616 [2024-11-04 02:28:49.492232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:02.616 [2024-11-04 02:28:49.492239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:02.616 [2024-11-04 02:28:49.492246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:02.616 [2024-11-04 02:28:49.492254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:02.616 [2024-11-04 02:28:49.492262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:02.616 [2024-11-04 02:28:49.492270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:02.616 [2024-11-04 02:28:49.492279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:02.616 [2024-11-04 02:28:49.492286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:02.616 [2024-11-04 02:28:49.492293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:02.616 [2024-11-04 02:28:49.492300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:02.616 [2024-11-04 02:28:49.492311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:02.616 [2024-11-04 02:28:49.492318] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:02.616 [2024-11-04 02:28:49.492327] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:02.616 [2024-11-04 02:28:49.492338] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:02.616 [2024-11-04 02:28:49.492346] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:02.616 [2024-11-04 02:28:49.492354] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:02.616 [2024-11-04 02:28:49.492362] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:02.616 [2024-11-04 02:28:49.492370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.616 [2024-11-04 02:28:49.492378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:02.616 [2024-11-04 02:28:49.492387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.733 ms 00:18:02.616 [2024-11-04 02:28:49.492398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.616 [2024-11-04 02:28:49.530744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.616 [2024-11-04 02:28:49.530950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:02.616 [2024-11-04 02:28:49.531185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.289 ms 00:18:02.616 [2024-11-04 02:28:49.531230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.616 [2024-11-04 02:28:49.531395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.616 [2024-11-04 02:28:49.531597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:02.616 [2024-11-04 02:28:49.531660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:18:02.616 [2024-11-04 02:28:49.531684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.616 [2024-11-04 02:28:49.583291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.616 [2024-11-04 02:28:49.583483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:02.616 [2024-11-04 02:28:49.583552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.550 ms 00:18:02.616 [2024-11-04 02:28:49.583579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.616 [2024-11-04 02:28:49.583753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.616 [2024-11-04 02:28:49.583789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:02.616 [2024-11-04 02:28:49.583813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:02.616 [2024-11-04 02:28:49.583835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.616 [2024-11-04 02:28:49.584556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.616 [2024-11-04 02:28:49.584702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:02.616 [2024-11-04 02:28:49.584761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.661 ms 00:18:02.616 [2024-11-04 02:28:49.584785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.616 [2024-11-04 02:28:49.585001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.616 [2024-11-04 02:28:49.585111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:02.616 [2024-11-04 02:28:49.585178] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:18:02.616 [2024-11-04 02:28:49.585203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.616 [2024-11-04 02:28:49.604166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.616 [2024-11-04 02:28:49.604323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:02.616 [2024-11-04 02:28:49.604382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.921 ms 00:18:02.616 [2024-11-04 02:28:49.604408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.616 [2024-11-04 02:28:49.619942] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:02.616 [2024-11-04 02:28:49.620112] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:02.616 [2024-11-04 02:28:49.620178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.616 [2024-11-04 02:28:49.620200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:02.616 [2024-11-04 02:28:49.620222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.623 ms 00:18:02.616 [2024-11-04 02:28:49.620240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.616 [2024-11-04 02:28:49.646820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.616 [2024-11-04 02:28:49.647007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:02.616 [2024-11-04 02:28:49.647075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.484 ms 00:18:02.616 [2024-11-04 02:28:49.647103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.616 [2024-11-04 02:28:49.660601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.616 [2024-11-04 02:28:49.660798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:02.616 [2024-11-04 02:28:49.660878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.126 ms 00:18:02.616 [2024-11-04 02:28:49.660905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.616 [2024-11-04 02:28:49.673331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.616 [2024-11-04 02:28:49.673497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:02.616 [2024-11-04 02:28:49.673557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.333 ms 00:18:02.616 [2024-11-04 02:28:49.673580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.616 [2024-11-04 02:28:49.674592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.616 [2024-11-04 02:28:49.674778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:02.616 [2024-11-04 02:28:49.674853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:18:02.616 [2024-11-04 02:28:49.675219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.878 [2024-11-04 02:28:49.748680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.878 [2024-11-04 02:28:49.748737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:02.878 [2024-11-04 02:28:49.748755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 73.307 ms 00:18:02.878 [2024-11-04 02:28:49.748766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.878 [2024-11-04 02:28:49.760950] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:02.878 [2024-11-04 02:28:49.785601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.878 [2024-11-04 02:28:49.785655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:02.878 [2024-11-04 02:28:49.785672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.711 ms 00:18:02.878 [2024-11-04 02:28:49.785683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.878 [2024-11-04 02:28:49.785800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.878 [2024-11-04 02:28:49.785812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:02.878 [2024-11-04 02:28:49.785823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:02.878 [2024-11-04 02:28:49.785832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.878 [2024-11-04 02:28:49.785929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.878 [2024-11-04 02:28:49.785942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:02.878 [2024-11-04 02:28:49.785952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:18:02.878 [2024-11-04 02:28:49.785960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.878 [2024-11-04 02:28:49.785994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.878 [2024-11-04 02:28:49.786008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:02.878 [2024-11-04 02:28:49.786018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:02.878 [2024-11-04 02:28:49.786026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.878 [2024-11-04 02:28:49.786083] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:02.878 [2024-11-04 02:28:49.786098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.878 [2024-11-04 02:28:49.786107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:02.878 [2024-11-04 02:28:49.786117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:02.878 [2024-11-04 02:28:49.786127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.878 [2024-11-04 02:28:49.812646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.878 [2024-11-04 02:28:49.812698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:02.878 [2024-11-04 02:28:49.812713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.491 ms 00:18:02.878 [2024-11-04 02:28:49.812723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.878 [2024-11-04 02:28:49.812849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.878 [2024-11-04 02:28:49.812861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:02.878 [2024-11-04 02:28:49.812893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:18:02.878 [2024-11-04 02:28:49.812905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
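The startup trace above follows a fixed pattern: each management step is reported as an Action / name / duration / status quadruple by trace_step() in mngt/ftl_mngt.c, and finish_msg() then sums the whole pipeline up as a single 'FTL startup' result. A minimal C sketch of that step-table idea, under hypothetical names (mgmt_step, run_process, and the demo handlers are illustrations, not the SPDK API):

#include <stddef.h>
#include <stdio.h>
#include <time.h>

/* One named step in a management pipeline. Hypothetical type; the real
 * SPDK descriptors are private to mngt/ftl_mngt.c. */
struct mgmt_step {
    const char *name;
    int (*action)(void *ctx);
};

static double elapsed_ms(const struct timespec *a, const struct timespec *b)
{
    return (b->tv_sec - a->tv_sec) * 1e3 + (b->tv_nsec - a->tv_nsec) / 1e6;
}

/* Run the steps in order, logging the same Action/name/duration/status
 * quadruple seen in the trace; stop at the first step that fails. */
static int run_process(const char *proc, const struct mgmt_step *steps,
                       size_t n, void *ctx)
{
    struct timespec p0, s0, s1;
    int status = 0;

    clock_gettime(CLOCK_MONOTONIC, &p0);
    for (size_t i = 0; i < n && status == 0; i++) {
        clock_gettime(CLOCK_MONOTONIC, &s0);
        status = steps[i].action(ctx);
        clock_gettime(CLOCK_MONOTONIC, &s1);
        printf("[FTL][ftl0] Action\n");
        printf("[FTL][ftl0] name: %s\n", steps[i].name);
        printf("[FTL][ftl0] duration: %.3f ms\n", elapsed_ms(&s0, &s1));
        printf("[FTL][ftl0] status: %d\n", status);
    }
    clock_gettime(CLOCK_MONOTONIC, &s1);
    printf("[FTL][ftl0] Management process finished, name '%s', "
           "duration = %.3f ms, result %d\n",
           proc, elapsed_ms(&p0, &s1), status);
    return status;
}

/* Two no-op demo steps so the sketch runs standalone. */
static int init_pools(void *ctx) { (void)ctx; return 0; }
static int init_bands(void *ctx) { (void)ctx; return 0; }

int main(void)
{
    const struct mgmt_step startup[] = {
        { "Initialize memory pools", init_pools },
        { "Initialize bands", init_bands },
    };
    return run_process("FTL startup", startup,
                       sizeof(startup) / sizeof(startup[0]), NULL);
}

The Rollback entries later in this trace name the same startup steps (Initialize reloc, Initialize bands metadata, Open base bdev, ...), consistent with such a table being walked in reverse while the 'FTL shutdown' process tears the device down.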
00:18:02.878 [2024-11-04 02:28:49.814235] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:02.878 [2024-11-04 02:28:49.817787] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 362.146 ms, result 0 00:18:02.878 [2024-11-04 02:28:49.819157] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:02.878 [2024-11-04 02:28:49.832791] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:03.139  [2024-11-04T02:28:50.250Z] Copying: 4096/4096 [kB] (average 10 MBps)[2024-11-04 02:28:50.225065] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:03.139 [2024-11-04 02:28:50.235928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.139 [2024-11-04 02:28:50.236129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:03.139 [2024-11-04 02:28:50.236156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:03.139 [2024-11-04 02:28:50.236168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.139 [2024-11-04 02:28:50.236210] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:03.139 [2024-11-04 02:28:50.239510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.139 [2024-11-04 02:28:50.239684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:03.139 [2024-11-04 02:28:50.239706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.285 ms 00:18:03.139 [2024-11-04 02:28:50.239728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.139 [2024-11-04 02:28:50.242979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.139 [2024-11-04 02:28:50.243026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:03.139 [2024-11-04 02:28:50.243039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.219 ms 00:18:03.139 [2024-11-04 02:28:50.243049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.139 [2024-11-04 02:28:50.247569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.139 [2024-11-04 02:28:50.247610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:03.139 [2024-11-04 02:28:50.247630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.502 ms 00:18:03.139 [2024-11-04 02:28:50.247639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.401 [2024-11-04 02:28:50.254842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.401 [2024-11-04 02:28:50.255022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:03.401 [2024-11-04 02:28:50.255043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.167 ms 00:18:03.401 [2024-11-04 02:28:50.255052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.401 [2024-11-04 02:28:50.280984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.401 [2024-11-04 02:28:50.281029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:03.401 [2024-11-04 02:28:50.281042] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 25.871 ms 00:18:03.401 [2024-11-04 02:28:50.281051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.401 [2024-11-04 02:28:50.298055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.401 [2024-11-04 02:28:50.298241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:03.401 [2024-11-04 02:28:50.298263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.932 ms 00:18:03.402 [2024-11-04 02:28:50.298279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.402 [2024-11-04 02:28:50.298847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.402 [2024-11-04 02:28:50.298924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:03.402 [2024-11-04 02:28:50.298938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:18:03.402 [2024-11-04 02:28:50.298947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.402 [2024-11-04 02:28:50.325200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.402 [2024-11-04 02:28:50.325413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:03.402 [2024-11-04 02:28:50.325436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.220 ms 00:18:03.402 [2024-11-04 02:28:50.325445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.402 [2024-11-04 02:28:50.350587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.402 [2024-11-04 02:28:50.350630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:03.402 [2024-11-04 02:28:50.350641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.085 ms 00:18:03.402 [2024-11-04 02:28:50.350649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.402 [2024-11-04 02:28:50.375447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.402 [2024-11-04 02:28:50.375486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:03.402 [2024-11-04 02:28:50.375498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.749 ms 00:18:03.402 [2024-11-04 02:28:50.375505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.402 [2024-11-04 02:28:50.400206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.402 [2024-11-04 02:28:50.400248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:03.402 [2024-11-04 02:28:50.400259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.620 ms 00:18:03.402 [2024-11-04 02:28:50.400266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.402 [2024-11-04 02:28:50.400313] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:03.402 [2024-11-04 02:28:50.400330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:18:03.402 [2024-11-04 02:28:50.400367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:03.402 [2024-11-04 02:28:50.400933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:03.403 [2024-11-04 02:28:50.400941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:03.403 [2024-11-04 02:28:50.400949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:03.403 [2024-11-04 02:28:50.400957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:03.403 [2024-11-04 02:28:50.400966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:03.403 [2024-11-04 02:28:50.400974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:03.403 [2024-11-04 02:28:50.400981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:03.403 [2024-11-04 02:28:50.400989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:03.403 [2024-11-04 02:28:50.400996] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:03.403 [2024-11-04 02:28:50.401003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:03.403 [2024-11-04 02:28:50.401013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:03.403 [2024-11-04 02:28:50.401020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:03.403 [2024-11-04 02:28:50.401027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:03.403 [2024-11-04 02:28:50.401035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:03.403 [2024-11-04 02:28:50.401044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:03.403 [2024-11-04 02:28:50.401052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:03.403 [2024-11-04 02:28:50.401063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:03.403 [2024-11-04 02:28:50.401071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:03.403 [2024-11-04 02:28:50.401079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:03.403 [2024-11-04 02:28:50.401088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:03.403 [2024-11-04 02:28:50.401097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:03.403 [2024-11-04 02:28:50.401104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:03.403 [2024-11-04 02:28:50.401112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:03.403 [2024-11-04 02:28:50.401121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:03.403 [2024-11-04 02:28:50.401130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:03.403 [2024-11-04 02:28:50.401149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:03.403 [2024-11-04 02:28:50.401156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:03.403 [2024-11-04 02:28:50.401164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:03.403 [2024-11-04 02:28:50.401184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:03.403 [2024-11-04 02:28:50.401192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:03.403 [2024-11-04 02:28:50.401208] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:03.403 [2024-11-04 02:28:50.401218] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c6f3f3b2-f208-412f-922b-b47e9684c98b 00:18:03.403 [2024-11-04 02:28:50.401227] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:03.403 [2024-11-04 02:28:50.401235] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:18:03.403 [2024-11-04 02:28:50.401242] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:03.403 [2024-11-04 02:28:50.401250] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:03.403 [2024-11-04 02:28:50.401260] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:03.403 [2024-11-04 02:28:50.401269] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:03.403 [2024-11-04 02:28:50.401276] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:03.403 [2024-11-04 02:28:50.401283] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:03.403 [2024-11-04 02:28:50.401289] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:03.403 [2024-11-04 02:28:50.401296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.403 [2024-11-04 02:28:50.401308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:03.403 [2024-11-04 02:28:50.401317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:18:03.403 [2024-11-04 02:28:50.401327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.403 [2024-11-04 02:28:50.415616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.403 [2024-11-04 02:28:50.415797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:03.403 [2024-11-04 02:28:50.415816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.256 ms 00:18:03.403 [2024-11-04 02:28:50.415825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.403 [2024-11-04 02:28:50.416306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.403 [2024-11-04 02:28:50.416321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:03.403 [2024-11-04 02:28:50.416332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:18:03.403 [2024-11-04 02:28:50.416341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.403 [2024-11-04 02:28:50.458125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.403 [2024-11-04 02:28:50.458305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:03.403 [2024-11-04 02:28:50.458325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.403 [2024-11-04 02:28:50.458335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.403 [2024-11-04 02:28:50.458439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.403 [2024-11-04 02:28:50.458451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:03.403 [2024-11-04 02:28:50.458459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.403 [2024-11-04 02:28:50.458469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.403 [2024-11-04 02:28:50.458526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.403 [2024-11-04 02:28:50.458537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:03.403 [2024-11-04 02:28:50.458545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.403 [2024-11-04 02:28:50.458554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.403 [2024-11-04 02:28:50.458576] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.403 [2024-11-04 02:28:50.458592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:03.403 [2024-11-04 02:28:50.458602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.403 [2024-11-04 02:28:50.458610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.664 [2024-11-04 02:28:50.549320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.664 [2024-11-04 02:28:50.549550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:03.664 [2024-11-04 02:28:50.549572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.664 [2024-11-04 02:28:50.549582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.664 [2024-11-04 02:28:50.623016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.664 [2024-11-04 02:28:50.623083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:03.664 [2024-11-04 02:28:50.623098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.664 [2024-11-04 02:28:50.623107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.664 [2024-11-04 02:28:50.623185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.664 [2024-11-04 02:28:50.623196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:03.664 [2024-11-04 02:28:50.623206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.664 [2024-11-04 02:28:50.623214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.664 [2024-11-04 02:28:50.623251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.664 [2024-11-04 02:28:50.623261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:03.664 [2024-11-04 02:28:50.623276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.664 [2024-11-04 02:28:50.623285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.664 [2024-11-04 02:28:50.623396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.664 [2024-11-04 02:28:50.623408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:03.664 [2024-11-04 02:28:50.623420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.664 [2024-11-04 02:28:50.623429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.664 [2024-11-04 02:28:50.623465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.664 [2024-11-04 02:28:50.623476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:03.664 [2024-11-04 02:28:50.623485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.664 [2024-11-04 02:28:50.623501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.664 [2024-11-04 02:28:50.623556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.664 [2024-11-04 02:28:50.623568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:03.664 [2024-11-04 02:28:50.623578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.664 [2024-11-04 02:28:50.623587] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:03.664 [2024-11-04 02:28:50.623655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.664 [2024-11-04 02:28:50.623669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:03.664 [2024-11-04 02:28:50.623683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.664 [2024-11-04 02:28:50.623692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.664 [2024-11-04 02:28:50.623924] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 387.945 ms, result 0 00:18:04.608 00:18:04.608 00:18:04.608 02:28:51 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=74193 00:18:04.608 02:28:51 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 74193 00:18:04.608 02:28:51 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 74193 ']' 00:18:04.608 02:28:51 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:18:04.608 02:28:51 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.608 02:28:51 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:04.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.608 02:28:51 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.608 02:28:51 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:04.608 02:28:51 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:04.608 [2024-11-04 02:28:51.567805] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
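The xtrace lines above show the harness side of the test: trim.sh launches spdk_tgt with -L ftl_init, records its pid (74193), and then calls waitforlisten (a shell helper in common/autotest_common.sh, per the trace) to block until the target's JSON-RPC server is accepting connections on /var/tmp/spdk.sock. A hedged C sketch of that wait loop, with hypothetical names (wait_for_listen and its retry policy are illustrative, not the harness code):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Poll a UNIX domain socket until a server accepts a connection on it,
 * in the spirit of waitforlisten. Hypothetical helper, not harness code. */
static int wait_for_listen(const char *path, int max_retries)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };

    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);

        if (fd < 0)
            return -1;              /* no sockets at all: give up */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);              /* target is up and listening */
            return 0;
        }
        close(fd);
        usleep(100 * 1000);         /* back off 100 ms and retry */
    }
    return -1;                      /* timed out waiting for the RPC server */
}

int main(void)
{
    if (wait_for_listen("/var/tmp/spdk.sock", 100) != 0) {
        fprintf(stderr, "spdk.sock never came up\n");
        return 1;
    }
    puts("rpc server is listening");
    return 0;
}

Only once that gate opens does the script drive the target over the socket, which is why the rpc.py load_config call and the second FTL startup trace appear below the 'Waiting for process...' message.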
00:18:04.608 [2024-11-04 02:28:51.567995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74193 ] 00:18:04.868 [2024-11-04 02:28:51.734971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.868 [2024-11-04 02:28:51.874736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.852 02:28:52 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:05.852 02:28:52 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:18:05.852 02:28:52 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:05.852 [2024-11-04 02:28:52.879329] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:05.852 [2024-11-04 02:28:52.879655] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:06.113 [2024-11-04 02:28:53.061435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.113 [2024-11-04 02:28:53.061497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:06.113 [2024-11-04 02:28:53.061517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:06.113 [2024-11-04 02:28:53.061528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.113 [2024-11-04 02:28:53.064757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.113 [2024-11-04 02:28:53.064811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:06.113 [2024-11-04 02:28:53.064825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.206 ms 00:18:06.113 [2024-11-04 02:28:53.064834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.113 [2024-11-04 02:28:53.064983] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:06.113 [2024-11-04 02:28:53.065768] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:06.113 [2024-11-04 02:28:53.065810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.113 [2024-11-04 02:28:53.065819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:06.113 [2024-11-04 02:28:53.065831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.841 ms 00:18:06.113 [2024-11-04 02:28:53.065840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.113 [2024-11-04 02:28:53.068176] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:06.113 [2024-11-04 02:28:53.083575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.113 [2024-11-04 02:28:53.083876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:06.113 [2024-11-04 02:28:53.083902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.406 ms 00:18:06.113 [2024-11-04 02:28:53.083915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.113 [2024-11-04 02:28:53.084030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.113 [2024-11-04 02:28:53.084046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:06.113 [2024-11-04 02:28:53.084056] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:18:06.113 [2024-11-04 02:28:53.084066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.113 [2024-11-04 02:28:53.095385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.113 [2024-11-04 02:28:53.095437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:06.113 [2024-11-04 02:28:53.095448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.265 ms 00:18:06.113 [2024-11-04 02:28:53.095459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.113 [2024-11-04 02:28:53.095606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.113 [2024-11-04 02:28:53.095623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:06.113 [2024-11-04 02:28:53.095632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:18:06.113 [2024-11-04 02:28:53.095642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.113 [2024-11-04 02:28:53.095670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.113 [2024-11-04 02:28:53.095685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:06.113 [2024-11-04 02:28:53.095693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:06.113 [2024-11-04 02:28:53.095704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.113 [2024-11-04 02:28:53.095761] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:06.113 [2024-11-04 02:28:53.100256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.113 [2024-11-04 02:28:53.100296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:06.113 [2024-11-04 02:28:53.100310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.498 ms 00:18:06.113 [2024-11-04 02:28:53.100318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.113 [2024-11-04 02:28:53.100384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.113 [2024-11-04 02:28:53.100393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:06.113 [2024-11-04 02:28:53.100406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:06.113 [2024-11-04 02:28:53.100414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.113 [2024-11-04 02:28:53.100439] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:06.113 [2024-11-04 02:28:53.100468] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:06.113 [2024-11-04 02:28:53.100519] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:06.113 [2024-11-04 02:28:53.100536] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:06.113 [2024-11-04 02:28:53.100656] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:06.113 [2024-11-04 02:28:53.100670] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:06.113 [2024-11-04 02:28:53.100684] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:06.113 [2024-11-04 02:28:53.100695] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:06.113 [2024-11-04 02:28:53.100710] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:06.113 [2024-11-04 02:28:53.100719] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:06.113 [2024-11-04 02:28:53.100731] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:06.113 [2024-11-04 02:28:53.100740] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:06.113 [2024-11-04 02:28:53.100753] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:06.113 [2024-11-04 02:28:53.100762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.113 [2024-11-04 02:28:53.100774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:06.113 [2024-11-04 02:28:53.100783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:18:06.113 [2024-11-04 02:28:53.100793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.113 [2024-11-04 02:28:53.100905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.113 [2024-11-04 02:28:53.100918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:06.113 [2024-11-04 02:28:53.100930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:18:06.113 [2024-11-04 02:28:53.100942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.113 [2024-11-04 02:28:53.101044] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:06.113 [2024-11-04 02:28:53.101059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:06.113 [2024-11-04 02:28:53.101067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:06.113 [2024-11-04 02:28:53.101078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.113 [2024-11-04 02:28:53.101088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:06.113 [2024-11-04 02:28:53.101097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:06.113 [2024-11-04 02:28:53.101105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:06.113 [2024-11-04 02:28:53.101122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:06.113 [2024-11-04 02:28:53.101131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:06.113 [2024-11-04 02:28:53.101141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:06.113 [2024-11-04 02:28:53.101148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:06.113 [2024-11-04 02:28:53.101158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:06.113 [2024-11-04 02:28:53.101165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:06.113 [2024-11-04 02:28:53.101176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:06.113 [2024-11-04 02:28:53.101183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:06.113 [2024-11-04 02:28:53.101193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.113 
[2024-11-04 02:28:53.101201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:06.113 [2024-11-04 02:28:53.101215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:06.113 [2024-11-04 02:28:53.101224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.113 [2024-11-04 02:28:53.101234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:06.113 [2024-11-04 02:28:53.101248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:06.113 [2024-11-04 02:28:53.101258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:06.113 [2024-11-04 02:28:53.101265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:06.113 [2024-11-04 02:28:53.101277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:06.113 [2024-11-04 02:28:53.101284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:06.113 [2024-11-04 02:28:53.101293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:06.113 [2024-11-04 02:28:53.101299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:06.113 [2024-11-04 02:28:53.101309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:06.113 [2024-11-04 02:28:53.101316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:06.113 [2024-11-04 02:28:53.101324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:06.113 [2024-11-04 02:28:53.101331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:06.113 [2024-11-04 02:28:53.101342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:06.113 [2024-11-04 02:28:53.101349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:06.113 [2024-11-04 02:28:53.101358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:06.113 [2024-11-04 02:28:53.101366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:06.113 [2024-11-04 02:28:53.101375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:06.113 [2024-11-04 02:28:53.101382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:06.113 [2024-11-04 02:28:53.101390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:06.113 [2024-11-04 02:28:53.101398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:06.113 [2024-11-04 02:28:53.101410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.113 [2024-11-04 02:28:53.101417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:06.113 [2024-11-04 02:28:53.101427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:06.113 [2024-11-04 02:28:53.101435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.113 [2024-11-04 02:28:53.101445] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:06.113 [2024-11-04 02:28:53.101454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:06.113 [2024-11-04 02:28:53.101466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:06.113 [2024-11-04 02:28:53.101476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.113 [2024-11-04 02:28:53.101486] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:18:06.113 [2024-11-04 02:28:53.101493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:06.113 [2024-11-04 02:28:53.101503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:06.113 [2024-11-04 02:28:53.101511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:06.113 [2024-11-04 02:28:53.101520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:06.113 [2024-11-04 02:28:53.101526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:06.113 [2024-11-04 02:28:53.101537] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:06.113 [2024-11-04 02:28:53.101547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:06.113 [2024-11-04 02:28:53.101564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:06.113 [2024-11-04 02:28:53.101572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:06.113 [2024-11-04 02:28:53.101581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:06.113 [2024-11-04 02:28:53.101588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:06.113 [2024-11-04 02:28:53.101599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:06.113 [2024-11-04 02:28:53.101606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:06.113 [2024-11-04 02:28:53.101615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:06.113 [2024-11-04 02:28:53.101622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:06.113 [2024-11-04 02:28:53.101633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:06.113 [2024-11-04 02:28:53.101641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:06.113 [2024-11-04 02:28:53.101650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:06.113 [2024-11-04 02:28:53.101658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:06.113 [2024-11-04 02:28:53.101666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:06.113 [2024-11-04 02:28:53.101673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:06.113 [2024-11-04 02:28:53.101683] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:06.113 [2024-11-04 
02:28:53.101693] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:06.113 [2024-11-04 02:28:53.101707] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:06.113 [2024-11-04 02:28:53.101714] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:06.113 [2024-11-04 02:28:53.101723] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:06.113 [2024-11-04 02:28:53.101732] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:06.113 [2024-11-04 02:28:53.101742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.113 [2024-11-04 02:28:53.101751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:06.113 [2024-11-04 02:28:53.101763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.765 ms 00:18:06.113 [2024-11-04 02:28:53.101771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.113 [2024-11-04 02:28:53.140073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.113 [2024-11-04 02:28:53.140123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:06.113 [2024-11-04 02:28:53.140138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.238 ms 00:18:06.113 [2024-11-04 02:28:53.140147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.113 [2024-11-04 02:28:53.140288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.114 [2024-11-04 02:28:53.140302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:06.114 [2024-11-04 02:28:53.140316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:06.114 [2024-11-04 02:28:53.140324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.114 [2024-11-04 02:28:53.179743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.114 [2024-11-04 02:28:53.179791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:06.114 [2024-11-04 02:28:53.179807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.389 ms 00:18:06.114 [2024-11-04 02:28:53.179819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.114 [2024-11-04 02:28:53.179933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.114 [2024-11-04 02:28:53.179945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:06.114 [2024-11-04 02:28:53.179958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:06.114 [2024-11-04 02:28:53.179967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.114 [2024-11-04 02:28:53.180669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.114 [2024-11-04 02:28:53.180702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:06.114 [2024-11-04 02:28:53.180717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.672 ms 00:18:06.114 [2024-11-04 02:28:53.180729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:06.114 [2024-11-04 02:28:53.180932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.114 [2024-11-04 02:28:53.180945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:06.114 [2024-11-04 02:28:53.180958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:18:06.114 [2024-11-04 02:28:53.180969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.114 [2024-11-04 02:28:53.201933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.114 [2024-11-04 02:28:53.201977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:06.114 [2024-11-04 02:28:53.201993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.932 ms 00:18:06.114 [2024-11-04 02:28:53.202002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.114 [2024-11-04 02:28:53.217287] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:06.114 [2024-11-04 02:28:53.217334] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:06.114 [2024-11-04 02:28:53.217351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.114 [2024-11-04 02:28:53.217361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:06.114 [2024-11-04 02:28:53.217373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.229 ms 00:18:06.114 [2024-11-04 02:28:53.217381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.375 [2024-11-04 02:28:53.243684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.375 [2024-11-04 02:28:53.243989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:06.375 [2024-11-04 02:28:53.244017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.207 ms 00:18:06.375 [2024-11-04 02:28:53.244027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.375 [2024-11-04 02:28:53.257267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.375 [2024-11-04 02:28:53.257311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:06.375 [2024-11-04 02:28:53.257330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.970 ms 00:18:06.375 [2024-11-04 02:28:53.257338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.375 [2024-11-04 02:28:53.270150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.375 [2024-11-04 02:28:53.270193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:06.375 [2024-11-04 02:28:53.270207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.721 ms 00:18:06.375 [2024-11-04 02:28:53.270215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.375 [2024-11-04 02:28:53.270901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.375 [2024-11-04 02:28:53.270929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:06.375 [2024-11-04 02:28:53.270944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:18:06.375 [2024-11-04 02:28:53.270953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.375 [2024-11-04 
02:28:53.354755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.375 [2024-11-04 02:28:53.354816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:06.375 [2024-11-04 02:28:53.354837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.770 ms 00:18:06.375 [2024-11-04 02:28:53.354847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.375 [2024-11-04 02:28:53.368039] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:06.375 [2024-11-04 02:28:53.392649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.375 [2024-11-04 02:28:53.392711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:06.375 [2024-11-04 02:28:53.392726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.685 ms 00:18:06.375 [2024-11-04 02:28:53.392738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.375 [2024-11-04 02:28:53.392842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.375 [2024-11-04 02:28:53.392856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:06.375 [2024-11-04 02:28:53.392891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:06.375 [2024-11-04 02:28:53.392904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.375 [2024-11-04 02:28:53.392976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.375 [2024-11-04 02:28:53.392993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:06.375 [2024-11-04 02:28:53.393003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:18:06.375 [2024-11-04 02:28:53.393015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.375 [2024-11-04 02:28:53.393046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.375 [2024-11-04 02:28:53.393060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:06.376 [2024-11-04 02:28:53.393070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:06.376 [2024-11-04 02:28:53.393084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.376 [2024-11-04 02:28:53.393127] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:06.376 [2024-11-04 02:28:53.393144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.376 [2024-11-04 02:28:53.393153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:06.376 [2024-11-04 02:28:53.393164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:06.376 [2024-11-04 02:28:53.393177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.376 [2024-11-04 02:28:53.420529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.376 [2024-11-04 02:28:53.420579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:06.376 [2024-11-04 02:28:53.420597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.321 ms 00:18:06.376 [2024-11-04 02:28:53.420607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.376 [2024-11-04 02:28:53.420744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.376 [2024-11-04 02:28:53.420757] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:06.376 [2024-11-04 02:28:53.420773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:18:06.376 [2024-11-04 02:28:53.420781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.376 [2024-11-04 02:28:53.422103] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:06.376 [2024-11-04 02:28:53.425487] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 360.242 ms, result 0 00:18:06.376 [2024-11-04 02:28:53.427599] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:06.376 Some configs were skipped because the RPC state that can call them passed over. 00:18:06.376 02:28:53 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:18:06.637 [2024-11-04 02:28:53.668176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.637 [2024-11-04 02:28:53.668453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:06.637 [2024-11-04 02:28:53.668716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.135 ms 00:18:06.637 [2024-11-04 02:28:53.668765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.637 [2024-11-04 02:28:53.668831] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.789 ms, result 0 00:18:06.637 true 00:18:06.637 02:28:53 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:18:06.898 [2024-11-04 02:28:53.884090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.898 [2024-11-04 02:28:53.884141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:06.898 [2024-11-04 02:28:53.884155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.760 ms 00:18:06.898 [2024-11-04 02:28:53.884164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.898 [2024-11-04 02:28:53.884203] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.876 ms, result 0 00:18:06.898 true 00:18:06.898 02:28:53 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 74193 00:18:06.898 02:28:53 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 74193 ']' 00:18:06.898 02:28:53 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 74193 00:18:06.898 02:28:53 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:18:06.898 02:28:53 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:06.899 02:28:53 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74193 00:18:06.899 killing process with pid 74193 00:18:06.899 02:28:53 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:06.899 02:28:53 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:06.899 02:28:53 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74193' 00:18:06.899 02:28:53 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 74193 00:18:06.899 02:28:53 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 74193 00:18:07.845 [2024-11-04 02:28:54.606215] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.845 [2024-11-04 02:28:54.606265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:07.845 [2024-11-04 02:28:54.606276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:07.845 [2024-11-04 02:28:54.606284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.845 [2024-11-04 02:28:54.606303] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:07.845 [2024-11-04 02:28:54.608546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.845 [2024-11-04 02:28:54.608572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:07.845 [2024-11-04 02:28:54.608586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.228 ms 00:18:07.845 [2024-11-04 02:28:54.608592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.845 [2024-11-04 02:28:54.608825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.845 [2024-11-04 02:28:54.608833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:07.845 [2024-11-04 02:28:54.608842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:18:07.845 [2024-11-04 02:28:54.608849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.845 [2024-11-04 02:28:54.612648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.845 [2024-11-04 02:28:54.612675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:07.845 [2024-11-04 02:28:54.612684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.780 ms 00:18:07.845 [2024-11-04 02:28:54.612692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.845 [2024-11-04 02:28:54.617947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.845 [2024-11-04 02:28:54.617972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:07.845 [2024-11-04 02:28:54.617982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.225 ms 00:18:07.845 [2024-11-04 02:28:54.617989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.845 [2024-11-04 02:28:54.626131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.845 [2024-11-04 02:28:54.626155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:07.845 [2024-11-04 02:28:54.626166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.093 ms 00:18:07.845 [2024-11-04 02:28:54.626181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.845 [2024-11-04 02:28:54.633995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.845 [2024-11-04 02:28:54.634020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:07.845 [2024-11-04 02:28:54.634030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.776 ms 00:18:07.845 [2024-11-04 02:28:54.634038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.845 [2024-11-04 02:28:54.634155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.845 [2024-11-04 02:28:54.634163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:07.845 [2024-11-04 02:28:54.634171] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:07.845 [2024-11-04 02:28:54.634178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.845 [2024-11-04 02:28:54.642545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.845 [2024-11-04 02:28:54.642568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:07.845 [2024-11-04 02:28:54.642577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.350 ms 00:18:07.845 [2024-11-04 02:28:54.642583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.845 [2024-11-04 02:28:54.650787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.845 [2024-11-04 02:28:54.650809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:07.845 [2024-11-04 02:28:54.650820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.174 ms 00:18:07.845 [2024-11-04 02:28:54.650826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.845 [2024-11-04 02:28:54.658357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.845 [2024-11-04 02:28:54.658379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:07.845 [2024-11-04 02:28:54.658388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.501 ms 00:18:07.845 [2024-11-04 02:28:54.658394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.845 [2024-11-04 02:28:54.665641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.845 [2024-11-04 02:28:54.665664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:07.845 [2024-11-04 02:28:54.665673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.197 ms 00:18:07.845 [2024-11-04 02:28:54.665678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.845 [2024-11-04 02:28:54.665705] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:07.845 [2024-11-04 02:28:54.665716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:07.845 [2024-11-04 02:28:54.665725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:07.845 [2024-11-04 02:28:54.665731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:07.845 [2024-11-04 02:28:54.665739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:07.845 [2024-11-04 02:28:54.665745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:07.845 [2024-11-04 02:28:54.665754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:07.845 [2024-11-04 02:28:54.665760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:07.845 [2024-11-04 02:28:54.665767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:07.845 [2024-11-04 02:28:54.665773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.665780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.665785] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.665793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.665798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.665805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.665810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.665817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.665824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.665832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.665838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.665845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.665851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.665860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.665878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.665885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.665891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.665898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.665904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.665911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.665919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.665926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.665932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.665940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.665945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.665953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.665958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 
[2024-11-04 02:28:54.665965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.665971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.665981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.665986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.665993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.665999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.666007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.666013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.666021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.666027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.666034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.666040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.666047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.666052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.666059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.666065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.666072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.666077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.666086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.666091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.666099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.666105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.666112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.666117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.666124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:18:07.846 [2024-11-04 02:28:54.666131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.666140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.666146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.666153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.666159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.666174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.666180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.666188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.666193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.666203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.666208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.666215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.666222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:07.846 [2024-11-04 02:28:54.666229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:07.847 [2024-11-04 02:28:54.666235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:07.847 [2024-11-04 02:28:54.666241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:07.847 [2024-11-04 02:28:54.666247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:07.847 [2024-11-04 02:28:54.666253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:07.847 [2024-11-04 02:28:54.666259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:07.847 [2024-11-04 02:28:54.666266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:07.847 [2024-11-04 02:28:54.666271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:07.847 [2024-11-04 02:28:54.666279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:07.847 [2024-11-04 02:28:54.666284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:07.847 [2024-11-04 02:28:54.666292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:07.847 [2024-11-04 02:28:54.666297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:18:07.847 [2024-11-04 02:28:54.666306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:07.847 [2024-11-04 02:28:54.666312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:07.847 [2024-11-04 02:28:54.666319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:07.847 [2024-11-04 02:28:54.666325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:07.847 [2024-11-04 02:28:54.666332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:07.847 [2024-11-04 02:28:54.666338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:07.847 [2024-11-04 02:28:54.666344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:07.847 [2024-11-04 02:28:54.666350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:07.847 [2024-11-04 02:28:54.666357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:07.847 [2024-11-04 02:28:54.666363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:07.847 [2024-11-04 02:28:54.666371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:07.847 [2024-11-04 02:28:54.666377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:07.847 [2024-11-04 02:28:54.666386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:07.847 [2024-11-04 02:28:54.666391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:07.847 [2024-11-04 02:28:54.666398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:07.847 [2024-11-04 02:28:54.666410] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:07.847 [2024-11-04 02:28:54.666419] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c6f3f3b2-f208-412f-922b-b47e9684c98b 00:18:07.847 [2024-11-04 02:28:54.666433] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:07.847 [2024-11-04 02:28:54.666442] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:07.847 [2024-11-04 02:28:54.666451] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:07.847 [2024-11-04 02:28:54.666458] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:07.847 [2024-11-04 02:28:54.666463] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:07.847 [2024-11-04 02:28:54.666471] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:07.847 [2024-11-04 02:28:54.666477] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:07.847 [2024-11-04 02:28:54.666484] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:07.847 [2024-11-04 02:28:54.666489] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:07.847 [2024-11-04 02:28:54.666496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:07.847 [2024-11-04 02:28:54.666502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:07.847 [2024-11-04 02:28:54.666510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.792 ms 00:18:07.847 [2024-11-04 02:28:54.666516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.847 [2024-11-04 02:28:54.676828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.847 [2024-11-04 02:28:54.677003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:07.847 [2024-11-04 02:28:54.677022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.284 ms 00:18:07.847 [2024-11-04 02:28:54.677029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.847 [2024-11-04 02:28:54.677342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.847 [2024-11-04 02:28:54.677355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:07.847 [2024-11-04 02:28:54.677364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:18:07.847 [2024-11-04 02:28:54.677370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.847 [2024-11-04 02:28:54.714067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.847 [2024-11-04 02:28:54.714094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:07.847 [2024-11-04 02:28:54.714104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.847 [2024-11-04 02:28:54.714111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.847 [2024-11-04 02:28:54.714195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.847 [2024-11-04 02:28:54.714202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:07.847 [2024-11-04 02:28:54.714210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.847 [2024-11-04 02:28:54.714216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.847 [2024-11-04 02:28:54.714254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.847 [2024-11-04 02:28:54.714263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:07.847 [2024-11-04 02:28:54.714274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.847 [2024-11-04 02:28:54.714280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.847 [2024-11-04 02:28:54.714294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.847 [2024-11-04 02:28:54.714301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:07.847 [2024-11-04 02:28:54.714309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.847 [2024-11-04 02:28:54.714315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.847 [2024-11-04 02:28:54.777107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.847 [2024-11-04 02:28:54.777139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:07.847 [2024-11-04 02:28:54.777150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.847 [2024-11-04 02:28:54.777157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.847 [2024-11-04 
02:28:54.828199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.847 [2024-11-04 02:28:54.828378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:07.847 [2024-11-04 02:28:54.828395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.847 [2024-11-04 02:28:54.828402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.847 [2024-11-04 02:28:54.828476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.847 [2024-11-04 02:28:54.828486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:07.847 [2024-11-04 02:28:54.828496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.847 [2024-11-04 02:28:54.828503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.847 [2024-11-04 02:28:54.828531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.847 [2024-11-04 02:28:54.828537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:07.847 [2024-11-04 02:28:54.828546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.847 [2024-11-04 02:28:54.828553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.847 [2024-11-04 02:28:54.828637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.848 [2024-11-04 02:28:54.828646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:07.848 [2024-11-04 02:28:54.828656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.848 [2024-11-04 02:28:54.828663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.848 [2024-11-04 02:28:54.828692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.848 [2024-11-04 02:28:54.828700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:07.848 [2024-11-04 02:28:54.828708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.848 [2024-11-04 02:28:54.828714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.848 [2024-11-04 02:28:54.828752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.848 [2024-11-04 02:28:54.828760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:07.848 [2024-11-04 02:28:54.828771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.848 [2024-11-04 02:28:54.828777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.848 [2024-11-04 02:28:54.828823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.848 [2024-11-04 02:28:54.828831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:07.848 [2024-11-04 02:28:54.828839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.848 [2024-11-04 02:28:54.828845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.848 [2024-11-04 02:28:54.828992] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 222.754 ms, result 0 00:18:08.419 02:28:55 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:08.420 [2024-11-04 02:28:55.439441] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:18:08.420 [2024-11-04 02:28:55.439695] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74246 ] 00:18:08.680 [2024-11-04 02:28:55.597015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.680 [2024-11-04 02:28:55.685273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.942 [2024-11-04 02:28:55.913669] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:08.942 [2024-11-04 02:28:55.913719] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:09.205 [2024-11-04 02:28:56.071649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.205 [2024-11-04 02:28:56.071815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:09.205 [2024-11-04 02:28:56.071833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:09.205 [2024-11-04 02:28:56.071841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.205 [2024-11-04 02:28:56.074238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.205 [2024-11-04 02:28:56.074271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:09.205 [2024-11-04 02:28:56.074280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.379 ms 00:18:09.205 [2024-11-04 02:28:56.074287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.205 [2024-11-04 02:28:56.074362] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:09.205 [2024-11-04 02:28:56.074993] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:09.205 [2024-11-04 02:28:56.075045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.205 [2024-11-04 02:28:56.075061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:09.205 [2024-11-04 02:28:56.075077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.690 ms 00:18:09.205 [2024-11-04 02:28:56.075181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.205 [2024-11-04 02:28:56.076512] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:09.205 [2024-11-04 02:28:56.086645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.205 [2024-11-04 02:28:56.086746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:09.205 [2024-11-04 02:28:56.086797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.134 ms 00:18:09.205 [2024-11-04 02:28:56.086815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.205 [2024-11-04 02:28:56.086900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.205 [2024-11-04 02:28:56.086924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:09.205 [2024-11-04 02:28:56.086940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:09.205 [2024-11-04 
02:28:56.086955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.205 [2024-11-04 02:28:56.093214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.205 [2024-11-04 02:28:56.093307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:09.205 [2024-11-04 02:28:56.093349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.216 ms 00:18:09.205 [2024-11-04 02:28:56.093366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.205 [2024-11-04 02:28:56.093450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.205 [2024-11-04 02:28:56.093470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:09.205 [2024-11-04 02:28:56.093486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:18:09.205 [2024-11-04 02:28:56.093501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.205 [2024-11-04 02:28:56.093530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.205 [2024-11-04 02:28:56.093547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:09.205 [2024-11-04 02:28:56.093604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:09.205 [2024-11-04 02:28:56.093613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.205 [2024-11-04 02:28:56.093633] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:09.205 [2024-11-04 02:28:56.096633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.205 [2024-11-04 02:28:56.096722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:09.205 [2024-11-04 02:28:56.096734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.004 ms 00:18:09.205 [2024-11-04 02:28:56.096741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.205 [2024-11-04 02:28:56.096775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.205 [2024-11-04 02:28:56.096781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:09.205 [2024-11-04 02:28:56.096788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:09.205 [2024-11-04 02:28:56.096794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.206 [2024-11-04 02:28:56.096808] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:09.206 [2024-11-04 02:28:56.096838] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:09.206 [2024-11-04 02:28:56.096882] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:09.206 [2024-11-04 02:28:56.096894] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:09.206 [2024-11-04 02:28:56.096978] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:09.206 [2024-11-04 02:28:56.096987] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:09.206 [2024-11-04 02:28:56.096996] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:18:09.206 [2024-11-04 02:28:56.097004] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:09.206 [2024-11-04 02:28:56.097011] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:09.206 [2024-11-04 02:28:56.097019] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:09.206 [2024-11-04 02:28:56.097025] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:09.206 [2024-11-04 02:28:56.097031] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:09.206 [2024-11-04 02:28:56.097037] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:09.206 [2024-11-04 02:28:56.097043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.206 [2024-11-04 02:28:56.097050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:09.206 [2024-11-04 02:28:56.097056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:18:09.206 [2024-11-04 02:28:56.097062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.206 [2024-11-04 02:28:56.097139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.206 [2024-11-04 02:28:56.097147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:09.206 [2024-11-04 02:28:56.097153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:09.206 [2024-11-04 02:28:56.097162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.206 [2024-11-04 02:28:56.097238] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:09.206 [2024-11-04 02:28:56.097247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:09.206 [2024-11-04 02:28:56.097253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:09.206 [2024-11-04 02:28:56.097260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:09.206 [2024-11-04 02:28:56.097266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:09.206 [2024-11-04 02:28:56.097271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:09.206 [2024-11-04 02:28:56.097277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:09.206 [2024-11-04 02:28:56.097283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:09.206 [2024-11-04 02:28:56.097290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:09.206 [2024-11-04 02:28:56.097296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:09.206 [2024-11-04 02:28:56.097301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:09.206 [2024-11-04 02:28:56.097307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:09.206 [2024-11-04 02:28:56.097315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:09.206 [2024-11-04 02:28:56.097325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:09.206 [2024-11-04 02:28:56.097331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:09.206 [2024-11-04 02:28:56.097336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:09.206 [2024-11-04 02:28:56.097341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:18:09.206 [2024-11-04 02:28:56.097346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:09.206 [2024-11-04 02:28:56.097351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:09.206 [2024-11-04 02:28:56.097356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:09.206 [2024-11-04 02:28:56.097362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:09.206 [2024-11-04 02:28:56.097367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:09.206 [2024-11-04 02:28:56.097372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:09.206 [2024-11-04 02:28:56.097377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:09.206 [2024-11-04 02:28:56.097382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:09.206 [2024-11-04 02:28:56.097387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:09.206 [2024-11-04 02:28:56.097393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:09.206 [2024-11-04 02:28:56.097398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:09.206 [2024-11-04 02:28:56.097403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:09.206 [2024-11-04 02:28:56.097408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:09.206 [2024-11-04 02:28:56.097412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:09.206 [2024-11-04 02:28:56.097418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:09.206 [2024-11-04 02:28:56.097423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:09.206 [2024-11-04 02:28:56.097428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:09.206 [2024-11-04 02:28:56.097433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:09.206 [2024-11-04 02:28:56.097438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:09.206 [2024-11-04 02:28:56.097442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:09.206 [2024-11-04 02:28:56.097447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:09.206 [2024-11-04 02:28:56.097453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:09.206 [2024-11-04 02:28:56.097458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:09.206 [2024-11-04 02:28:56.097462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:09.206 [2024-11-04 02:28:56.097468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:09.206 [2024-11-04 02:28:56.097472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:09.206 [2024-11-04 02:28:56.097478] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:09.206 [2024-11-04 02:28:56.097485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:09.206 [2024-11-04 02:28:56.097491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:09.206 [2024-11-04 02:28:56.097498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:09.206 [2024-11-04 02:28:56.097506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:09.206 [2024-11-04 02:28:56.097512] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:09.206 [2024-11-04 02:28:56.097517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:09.206 [2024-11-04 02:28:56.097523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:09.206 [2024-11-04 02:28:56.097528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:09.206 [2024-11-04 02:28:56.097533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:09.206 [2024-11-04 02:28:56.097540] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:09.206 [2024-11-04 02:28:56.097547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:09.206 [2024-11-04 02:28:56.097554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:09.206 [2024-11-04 02:28:56.097559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:09.206 [2024-11-04 02:28:56.097564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:09.206 [2024-11-04 02:28:56.097570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:09.206 [2024-11-04 02:28:56.097576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:09.206 [2024-11-04 02:28:56.097581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:09.206 [2024-11-04 02:28:56.097586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:09.206 [2024-11-04 02:28:56.097592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:09.206 [2024-11-04 02:28:56.097597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:09.206 [2024-11-04 02:28:56.097603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:09.206 [2024-11-04 02:28:56.097609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:09.207 [2024-11-04 02:28:56.097614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:09.207 [2024-11-04 02:28:56.097620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:09.207 [2024-11-04 02:28:56.097625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:09.207 [2024-11-04 02:28:56.097630] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:09.207 [2024-11-04 02:28:56.097637] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:09.207 [2024-11-04 02:28:56.097644] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:09.207 [2024-11-04 02:28:56.097650] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:09.207 [2024-11-04 02:28:56.097656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:09.207 [2024-11-04 02:28:56.097662] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:09.207 [2024-11-04 02:28:56.097667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.207 [2024-11-04 02:28:56.097675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:09.207 [2024-11-04 02:28:56.097681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:18:09.207 [2024-11-04 02:28:56.097689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.207 [2024-11-04 02:28:56.121949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.207 [2024-11-04 02:28:56.121977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:09.207 [2024-11-04 02:28:56.121986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.210 ms 00:18:09.207 [2024-11-04 02:28:56.121994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.207 [2024-11-04 02:28:56.122089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.207 [2024-11-04 02:28:56.122096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:09.207 [2024-11-04 02:28:56.122106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:09.207 [2024-11-04 02:28:56.122112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.207 [2024-11-04 02:28:56.167258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.207 [2024-11-04 02:28:56.167289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:09.207 [2024-11-04 02:28:56.167299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.127 ms 00:18:09.207 [2024-11-04 02:28:56.167306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.207 [2024-11-04 02:28:56.167388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.207 [2024-11-04 02:28:56.167397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:09.207 [2024-11-04 02:28:56.167403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:09.207 [2024-11-04 02:28:56.167410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.207 [2024-11-04 02:28:56.167807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.207 [2024-11-04 02:28:56.167819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:09.207 [2024-11-04 02:28:56.167827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:18:09.207 [2024-11-04 02:28:56.167833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.207 [2024-11-04 02:28:56.167965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:09.207 [2024-11-04 02:28:56.167975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:09.207 [2024-11-04 02:28:56.167982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:18:09.207 [2024-11-04 02:28:56.167988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.207 [2024-11-04 02:28:56.180292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.207 [2024-11-04 02:28:56.180316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:09.207 [2024-11-04 02:28:56.180324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.285 ms 00:18:09.207 [2024-11-04 02:28:56.180330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.207 [2024-11-04 02:28:56.191063] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:09.207 [2024-11-04 02:28:56.191091] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:09.207 [2024-11-04 02:28:56.191101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.207 [2024-11-04 02:28:56.191108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:09.207 [2024-11-04 02:28:56.191115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.687 ms 00:18:09.207 [2024-11-04 02:28:56.191121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.207 [2024-11-04 02:28:56.210537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.207 [2024-11-04 02:28:56.210569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:09.207 [2024-11-04 02:28:56.210578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.358 ms 00:18:09.207 [2024-11-04 02:28:56.210585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.207 [2024-11-04 02:28:56.219685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.207 [2024-11-04 02:28:56.219722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:09.207 [2024-11-04 02:28:56.219731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.044 ms 00:18:09.207 [2024-11-04 02:28:56.219737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.207 [2024-11-04 02:28:56.228577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.207 [2024-11-04 02:28:56.228602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:09.207 [2024-11-04 02:28:56.228609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.799 ms 00:18:09.207 [2024-11-04 02:28:56.228615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.207 [2024-11-04 02:28:56.229086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.207 [2024-11-04 02:28:56.229103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:09.207 [2024-11-04 02:28:56.229111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:18:09.207 [2024-11-04 02:28:56.229117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.207 [2024-11-04 02:28:56.278079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.207 [2024-11-04 02:28:56.278112] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:09.207 [2024-11-04 02:28:56.278123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.944 ms 00:18:09.207 [2024-11-04 02:28:56.278130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.207 [2024-11-04 02:28:56.286473] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:09.207 [2024-11-04 02:28:56.301127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.207 [2024-11-04 02:28:56.301154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:09.207 [2024-11-04 02:28:56.301166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.924 ms 00:18:09.207 [2024-11-04 02:28:56.301173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.207 [2024-11-04 02:28:56.301244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.207 [2024-11-04 02:28:56.301253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:09.207 [2024-11-04 02:28:56.301261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:09.207 [2024-11-04 02:28:56.301268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.207 [2024-11-04 02:28:56.301307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.207 [2024-11-04 02:28:56.301315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:09.207 [2024-11-04 02:28:56.301321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:18:09.207 [2024-11-04 02:28:56.301329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.207 [2024-11-04 02:28:56.301353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.207 [2024-11-04 02:28:56.301362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:09.207 [2024-11-04 02:28:56.301368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:09.207 [2024-11-04 02:28:56.301374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.207 [2024-11-04 02:28:56.301401] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:09.207 [2024-11-04 02:28:56.301409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.207 [2024-11-04 02:28:56.301415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:09.207 [2024-11-04 02:28:56.301423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:09.207 [2024-11-04 02:28:56.301429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.469 [2024-11-04 02:28:56.320611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.469 [2024-11-04 02:28:56.320637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:09.469 [2024-11-04 02:28:56.320647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.165 ms 00:18:09.469 [2024-11-04 02:28:56.320654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.469 [2024-11-04 02:28:56.320727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.469 [2024-11-04 02:28:56.320736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:09.469 [2024-11-04 02:28:56.320744] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:09.469 [2024-11-04 02:28:56.320750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.469 [2024-11-04 02:28:56.321506] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:09.469 [2024-11-04 02:28:56.323749] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 249.598 ms, result 0 00:18:09.469 [2024-11-04 02:28:56.324850] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:09.469 [2024-11-04 02:28:56.335594] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:10.414  [2024-11-04T02:28:58.471Z] Copying: 15/256 [MB] (15 MBps) [2024-11-04T02:28:59.419Z] Copying: 27/256 [MB] (11 MBps) [2024-11-04T02:29:00.444Z] Copying: 37/256 [MB] (10 MBps) [2024-11-04T02:29:01.391Z] Copying: 49/256 [MB] (11 MBps) [2024-11-04T02:29:02.782Z] Copying: 60/256 [MB] (10 MBps) [2024-11-04T02:29:03.726Z] Copying: 71/256 [MB] (11 MBps) [2024-11-04T02:29:04.670Z] Copying: 87/256 [MB] (16 MBps) [2024-11-04T02:29:05.616Z] Copying: 98/256 [MB] (11 MBps) [2024-11-04T02:29:06.567Z] Copying: 114/256 [MB] (15 MBps) [2024-11-04T02:29:07.510Z] Copying: 125/256 [MB] (10 MBps) [2024-11-04T02:29:08.454Z] Copying: 136/256 [MB] (11 MBps) [2024-11-04T02:29:09.395Z] Copying: 161/256 [MB] (25 MBps) [2024-11-04T02:29:10.782Z] Copying: 176/256 [MB] (15 MBps) [2024-11-04T02:29:11.725Z] Copying: 200/256 [MB] (23 MBps) [2024-11-04T02:29:12.667Z] Copying: 217/256 [MB] (16 MBps) [2024-11-04T02:29:13.609Z] Copying: 234/256 [MB] (17 MBps) [2024-11-04T02:29:13.871Z] Copying: 247/256 [MB] (13 MBps) [2024-11-04T02:29:14.132Z] Copying: 256/256 [MB] (average 14 MBps)[2024-11-04 02:29:13.964281] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:27.021 [2024-11-04 02:29:13.979272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.021 [2024-11-04 02:29:13.979330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:27.021 [2024-11-04 02:29:13.979347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:27.021 [2024-11-04 02:29:13.979357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.021 [2024-11-04 02:29:13.979397] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:27.021 [2024-11-04 02:29:13.982819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.021 [2024-11-04 02:29:13.982879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:27.021 [2024-11-04 02:29:13.982894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.403 ms 00:18:27.021 [2024-11-04 02:29:13.982904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.021 [2024-11-04 02:29:13.983837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.021 [2024-11-04 02:29:13.983888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:27.021 [2024-11-04 02:29:13.983900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.898 ms 00:18:27.021 [2024-11-04 02:29:13.983908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.021 [2024-11-04 
02:29:13.987625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.021 [2024-11-04 02:29:13.987649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:27.021 [2024-11-04 02:29:13.987668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.700 ms 00:18:27.021 [2024-11-04 02:29:13.987677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.021 [2024-11-04 02:29:13.994665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.021 [2024-11-04 02:29:13.994709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:27.021 [2024-11-04 02:29:13.994721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.966 ms 00:18:27.021 [2024-11-04 02:29:13.994730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.021 [2024-11-04 02:29:14.021031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.021 [2024-11-04 02:29:14.021081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:27.021 [2024-11-04 02:29:14.021096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.221 ms 00:18:27.021 [2024-11-04 02:29:14.021103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.021 [2024-11-04 02:29:14.036998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.021 [2024-11-04 02:29:14.037053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:27.021 [2024-11-04 02:29:14.037067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.838 ms 00:18:27.021 [2024-11-04 02:29:14.037080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.021 [2024-11-04 02:29:14.037243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.021 [2024-11-04 02:29:14.037255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:27.021 [2024-11-04 02:29:14.037266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:18:27.021 [2024-11-04 02:29:14.037274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.021 [2024-11-04 02:29:14.063294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.021 [2024-11-04 02:29:14.063342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:27.021 [2024-11-04 02:29:14.063356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.989 ms 00:18:27.021 [2024-11-04 02:29:14.063364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.021 [2024-11-04 02:29:14.089200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.021 [2024-11-04 02:29:14.089247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:27.021 [2024-11-04 02:29:14.089260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.765 ms 00:18:27.021 [2024-11-04 02:29:14.089268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.021 [2024-11-04 02:29:14.113793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.021 [2024-11-04 02:29:14.113842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:27.021 [2024-11-04 02:29:14.113855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.472 ms 00:18:27.021 [2024-11-04 02:29:14.113862] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.284 [2024-11-04 02:29:14.138679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.284 [2024-11-04 02:29:14.138723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:27.284 [2024-11-04 02:29:14.138735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.721 ms 00:18:27.284 [2024-11-04 02:29:14.138743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.284 [2024-11-04 02:29:14.138794] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:27.284 [2024-11-04 02:29:14.138810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.138821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.138830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.138839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.138846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.138854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.138862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.138890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.138898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.138910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.138918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.138926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.138935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.138944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.138951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.138959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.138967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.138976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.138984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.138992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.139000] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.139008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.139016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.139024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.139032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.139040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.139048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.139056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.139064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.139073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.139081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.139089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.139096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.139104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.139111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.139119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.139126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.139133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.139141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.139148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.139156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.139163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.139170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.139178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.139186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.139194] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:27.284 [2024-11-04 02:29:14.139201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 
02:29:14.139388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 
00:18:27.285 [2024-11-04 02:29:14.139597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:27.285 [2024-11-04 02:29:14.139647] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:27.285 [2024-11-04 02:29:14.139655] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c6f3f3b2-f208-412f-922b-b47e9684c98b 00:18:27.285 [2024-11-04 02:29:14.139664] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:27.285 [2024-11-04 02:29:14.139672] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:27.285 [2024-11-04 02:29:14.139679] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:27.285 [2024-11-04 02:29:14.139688] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:27.285 [2024-11-04 02:29:14.139696] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:27.285 [2024-11-04 02:29:14.139705] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:27.285 [2024-11-04 02:29:14.139738] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:27.285 [2024-11-04 02:29:14.139745] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:27.285 [2024-11-04 02:29:14.139752] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:27.285 [2024-11-04 02:29:14.139759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.285 [2024-11-04 02:29:14.139768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:27.285 [2024-11-04 02:29:14.139781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.967 ms 00:18:27.285 [2024-11-04 02:29:14.139792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.285 [2024-11-04 02:29:14.153456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.285 [2024-11-04 02:29:14.153501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:27.285 [2024-11-04 02:29:14.153514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.625 ms 00:18:27.285 [2024-11-04 02:29:14.153523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.285 [2024-11-04 02:29:14.153958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.285 [2024-11-04 02:29:14.153977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:27.285 [2024-11-04 02:29:14.153988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:18:27.285 [2024-11-04 02:29:14.153997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.285 [2024-11-04 02:29:14.193608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:27.285 [2024-11-04 02:29:14.193656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:18:27.285 [2024-11-04 02:29:14.193667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.285 [2024-11-04 02:29:14.193675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.285 [2024-11-04 02:29:14.193793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:27.285 [2024-11-04 02:29:14.193803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:27.285 [2024-11-04 02:29:14.193812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.285 [2024-11-04 02:29:14.193821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.285 [2024-11-04 02:29:14.193898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:27.285 [2024-11-04 02:29:14.193909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:27.285 [2024-11-04 02:29:14.193918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.285 [2024-11-04 02:29:14.193926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.285 [2024-11-04 02:29:14.193945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:27.285 [2024-11-04 02:29:14.193953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:27.285 [2024-11-04 02:29:14.193965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.285 [2024-11-04 02:29:14.193973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.285 [2024-11-04 02:29:14.279912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:27.285 [2024-11-04 02:29:14.279971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:27.285 [2024-11-04 02:29:14.279986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.285 [2024-11-04 02:29:14.279994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.285 [2024-11-04 02:29:14.350535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:27.285 [2024-11-04 02:29:14.350599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:27.286 [2024-11-04 02:29:14.350612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.286 [2024-11-04 02:29:14.350620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.286 [2024-11-04 02:29:14.350706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:27.286 [2024-11-04 02:29:14.350717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:27.286 [2024-11-04 02:29:14.350727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.286 [2024-11-04 02:29:14.350736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.286 [2024-11-04 02:29:14.350768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:27.286 [2024-11-04 02:29:14.350778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:27.286 [2024-11-04 02:29:14.350787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.286 [2024-11-04 02:29:14.350798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.286 [2024-11-04 02:29:14.350920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:27.286 [2024-11-04 02:29:14.350931] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:27.286 [2024-11-04 02:29:14.350940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.286 [2024-11-04 02:29:14.350948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.286 [2024-11-04 02:29:14.350985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:27.286 [2024-11-04 02:29:14.350994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:27.286 [2024-11-04 02:29:14.351003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.286 [2024-11-04 02:29:14.351014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.286 [2024-11-04 02:29:14.351060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:27.286 [2024-11-04 02:29:14.351070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:27.286 [2024-11-04 02:29:14.351079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.286 [2024-11-04 02:29:14.351087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.286 [2024-11-04 02:29:14.351138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:27.286 [2024-11-04 02:29:14.351149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:27.286 [2024-11-04 02:29:14.351157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.286 [2024-11-04 02:29:14.351168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.286 [2024-11-04 02:29:14.351328] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 372.053 ms, result 0 00:18:28.229 00:18:28.229 00:18:28.229 02:29:15 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:28.801 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:18:28.801 02:29:15 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:28.801 02:29:15 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:18:28.801 02:29:15 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:28.801 02:29:15 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:28.801 02:29:15 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:18:28.801 02:29:15 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:28.801 02:29:15 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 74193 00:18:28.801 02:29:15 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 74193 ']' 00:18:28.801 02:29:15 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 74193 00:18:28.801 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (74193) - No such process 00:18:28.801 Process with pid 74193 is not found 00:18:28.801 02:29:15 ftl.ftl_trim -- common/autotest_common.sh@979 -- # echo 'Process with pid 74193 is not found' 00:18:28.801 00:18:28.801 real 1m19.115s 00:18:28.801 user 1m43.024s 00:18:28.801 sys 0m6.060s 00:18:28.801 02:29:15 ftl.ftl_trim -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:28.801 02:29:15 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:28.801 ************************************ 00:18:28.801 END TEST ftl_trim 00:18:28.801 
************************************ 00:18:28.801 02:29:15 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:28.801 02:29:15 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:28.801 02:29:15 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:28.801 02:29:15 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:28.801 ************************************ 00:18:28.801 START TEST ftl_restore 00:18:28.801 ************************************ 00:18:28.801 02:29:15 ftl.ftl_restore -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:29.063 * Looking for test storage... 00:18:29.063 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:29.063 02:29:15 ftl.ftl_restore -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:29.063 02:29:15 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lcov --version 00:18:29.063 02:29:15 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:29.064 02:29:15 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:29.064 02:29:15 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:29.064 02:29:15 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:29.064 02:29:15 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:29.064 02:29:15 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:18:29.064 02:29:15 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:18:29.064 02:29:15 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:18:29.064 02:29:15 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:18:29.064 02:29:15 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:18:29.064 02:29:15 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:18:29.064 02:29:15 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:18:29.064 02:29:15 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:29.064 02:29:15 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:18:29.064 02:29:15 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:18:29.064 02:29:15 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:29.064 02:29:15 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:29.064 02:29:15 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:18:29.064 02:29:15 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:18:29.064 02:29:15 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:29.064 02:29:15 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:18:29.064 02:29:15 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:18:29.064 02:29:15 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:18:29.064 02:29:15 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:18:29.064 02:29:15 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:29.064 02:29:15 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:18:29.064 02:29:15 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:18:29.064 02:29:15 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:29.064 02:29:15 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:29.064 02:29:15 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:18:29.064 02:29:15 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:29.064 02:29:15 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:29.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:29.064 --rc genhtml_branch_coverage=1 00:18:29.064 --rc genhtml_function_coverage=1 00:18:29.064 --rc genhtml_legend=1 00:18:29.064 --rc geninfo_all_blocks=1 00:18:29.064 --rc geninfo_unexecuted_blocks=1 00:18:29.064 00:18:29.064 ' 00:18:29.064 02:29:15 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:29.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:29.064 --rc genhtml_branch_coverage=1 00:18:29.064 --rc genhtml_function_coverage=1 00:18:29.064 --rc genhtml_legend=1 00:18:29.064 --rc geninfo_all_blocks=1 00:18:29.064 --rc geninfo_unexecuted_blocks=1 00:18:29.064 00:18:29.064 ' 00:18:29.064 02:29:15 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:29.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:29.064 --rc genhtml_branch_coverage=1 00:18:29.064 --rc genhtml_function_coverage=1 00:18:29.064 --rc genhtml_legend=1 00:18:29.064 --rc geninfo_all_blocks=1 00:18:29.064 --rc geninfo_unexecuted_blocks=1 00:18:29.064 00:18:29.064 ' 00:18:29.064 02:29:15 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:29.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:29.064 --rc genhtml_branch_coverage=1 00:18:29.064 --rc genhtml_function_coverage=1 00:18:29.064 --rc genhtml_legend=1 00:18:29.064 --rc geninfo_all_blocks=1 00:18:29.064 --rc geninfo_unexecuted_blocks=1 00:18:29.064 00:18:29.064 ' 00:18:29.064 02:29:15 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
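
The xtrace block above is autotest_common.sh probing the installed lcov: "lt 1.15 2" splits both version strings on dots and compares them field by field, returning 0 because 1 < 2, which is why the exports that follow keep the lcov 1.x option spelling (--rc lcov_branch_coverage=1 rather than the 2.x --rc branch_coverage form). A condensed sketch of the same comparison (the helper name version_lt is illustrative, not from the scripts):

    version_lt() {                       # succeeds when $1 < $2
        local -a a b
        IFS='.' read -ra a <<< "$1"
        IFS='.' read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            # missing fields count as 0; 10# forces base-10 comparison
            (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
            (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
        done
        return 1                         # equal is not less-than
    }

    version_lt 1.15 2 && echo "lcov older than 2.x"
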
00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.LYP9nketjb 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:18:29.064 
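
At this point restore.sh holds its two PCI functions: 0000:00:10.0 will back the non-volatile cache and 0000:00:11.0 the base device of the FTL bdev under test. A hedged sketch of the construction these settings feed into (bdev names are illustrative, and the exact bdev_ftl_create flags should be checked against rpc.py's help output -- treat them as an assumption here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Attach both NVMe controllers, as the log below does for the base device.
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    $rpc bdev_nvme_attach_controller -b nvc0  -t PCIe -a 0000:00:10.0
    # Create the FTL bdev over base + cache; a dirty shutdown and reload of
    # this bdev is what the restore test exercises.
    $rpc bdev_ftl_create -b ftl0 -d nvme0n1 -c nvc0n1
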
02:29:16 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=74525 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 74525 00:18:29.064 02:29:16 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:29.064 02:29:16 ftl.ftl_restore -- common/autotest_common.sh@833 -- # '[' -z 74525 ']' 00:18:29.064 02:29:16 ftl.ftl_restore -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.064 02:29:16 ftl.ftl_restore -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:29.064 02:29:16 ftl.ftl_restore -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.064 02:29:16 ftl.ftl_restore -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:29.064 02:29:16 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:18:29.064 [2024-11-04 02:29:16.117088] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:18:29.064 [2024-11-04 02:29:16.117240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74525 ] 00:18:29.360 [2024-11-04 02:29:16.277726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.360 [2024-11-04 02:29:16.399025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.303 02:29:17 ftl.ftl_restore -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:30.303 02:29:17 ftl.ftl_restore -- common/autotest_common.sh@866 -- # return 0 00:18:30.303 02:29:17 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:30.303 02:29:17 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:18:30.303 02:29:17 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:30.303 02:29:17 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:18:30.303 02:29:17 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:18:30.303 02:29:17 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:30.303 02:29:17 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:30.303 02:29:17 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:18:30.303 02:29:17 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:30.303 02:29:17 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:18:30.303 02:29:17 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:30.303 02:29:17 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:18:30.303 02:29:17 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:18:30.303 02:29:17 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:30.875 02:29:17 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:30.875 { 00:18:30.875 "name": "nvme0n1", 00:18:30.875 "aliases": [ 00:18:30.875 "6260a82d-2cc3-48a2-aaa2-284450ea76fc" 00:18:30.875 ], 00:18:30.875 "product_name": "NVMe disk", 00:18:30.875 "block_size": 4096, 00:18:30.875 "num_blocks": 1310720, 00:18:30.875 "uuid": 
"6260a82d-2cc3-48a2-aaa2-284450ea76fc", 00:18:30.875 "numa_id": -1, 00:18:30.875 "assigned_rate_limits": { 00:18:30.875 "rw_ios_per_sec": 0, 00:18:30.875 "rw_mbytes_per_sec": 0, 00:18:30.875 "r_mbytes_per_sec": 0, 00:18:30.875 "w_mbytes_per_sec": 0 00:18:30.875 }, 00:18:30.875 "claimed": true, 00:18:30.875 "claim_type": "read_many_write_one", 00:18:30.875 "zoned": false, 00:18:30.875 "supported_io_types": { 00:18:30.875 "read": true, 00:18:30.875 "write": true, 00:18:30.875 "unmap": true, 00:18:30.875 "flush": true, 00:18:30.875 "reset": true, 00:18:30.875 "nvme_admin": true, 00:18:30.875 "nvme_io": true, 00:18:30.875 "nvme_io_md": false, 00:18:30.875 "write_zeroes": true, 00:18:30.875 "zcopy": false, 00:18:30.875 "get_zone_info": false, 00:18:30.875 "zone_management": false, 00:18:30.875 "zone_append": false, 00:18:30.875 "compare": true, 00:18:30.875 "compare_and_write": false, 00:18:30.875 "abort": true, 00:18:30.875 "seek_hole": false, 00:18:30.875 "seek_data": false, 00:18:30.875 "copy": true, 00:18:30.875 "nvme_iov_md": false 00:18:30.875 }, 00:18:30.875 "driver_specific": { 00:18:30.875 "nvme": [ 00:18:30.875 { 00:18:30.875 "pci_address": "0000:00:11.0", 00:18:30.875 "trid": { 00:18:30.875 "trtype": "PCIe", 00:18:30.875 "traddr": "0000:00:11.0" 00:18:30.875 }, 00:18:30.875 "ctrlr_data": { 00:18:30.875 "cntlid": 0, 00:18:30.875 "vendor_id": "0x1b36", 00:18:30.875 "model_number": "QEMU NVMe Ctrl", 00:18:30.875 "serial_number": "12341", 00:18:30.875 "firmware_revision": "8.0.0", 00:18:30.875 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:30.875 "oacs": { 00:18:30.875 "security": 0, 00:18:30.875 "format": 1, 00:18:30.875 "firmware": 0, 00:18:30.875 "ns_manage": 1 00:18:30.875 }, 00:18:30.875 "multi_ctrlr": false, 00:18:30.875 "ana_reporting": false 00:18:30.875 }, 00:18:30.875 "vs": { 00:18:30.875 "nvme_version": "1.4" 00:18:30.875 }, 00:18:30.875 "ns_data": { 00:18:30.875 "id": 1, 00:18:30.875 "can_share": false 00:18:30.875 } 00:18:30.875 } 00:18:30.875 ], 00:18:30.875 "mp_policy": "active_passive" 00:18:30.875 } 00:18:30.875 } 00:18:30.875 ]' 00:18:30.875 02:29:17 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:30.875 02:29:17 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:18:30.875 02:29:17 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:30.875 02:29:17 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=1310720 00:18:30.875 02:29:17 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:18:30.875 02:29:17 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 5120 00:18:30.875 02:29:17 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:18:30.875 02:29:17 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:30.875 02:29:17 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:18:30.875 02:29:17 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:30.875 02:29:17 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:30.875 02:29:17 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=5516dc46-f9ca-4d88-8075-da5f44f4df17 00:18:30.875 02:29:17 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:18:30.875 02:29:17 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5516dc46-f9ca-4d88-8075-da5f44f4df17 00:18:31.136 02:29:18 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:18:31.398 02:29:18 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=ceeb90ab-717b-46d5-8b60-bbe967283d88 00:18:31.398 02:29:18 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ceeb90ab-717b-46d5-8b60-bbe967283d88 00:18:31.659 02:29:18 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=5ef340f2-79de-428a-a780-05763527377b 00:18:31.659 02:29:18 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:18:31.659 02:29:18 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 5ef340f2-79de-428a-a780-05763527377b 00:18:31.659 02:29:18 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:18:31.659 02:29:18 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:31.659 02:29:18 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=5ef340f2-79de-428a-a780-05763527377b 00:18:31.659 02:29:18 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:18:31.659 02:29:18 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 5ef340f2-79de-428a-a780-05763527377b 00:18:31.659 02:29:18 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=5ef340f2-79de-428a-a780-05763527377b 00:18:31.659 02:29:18 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:31.659 02:29:18 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:18:31.659 02:29:18 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:18:31.660 02:29:18 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5ef340f2-79de-428a-a780-05763527377b 00:18:31.922 02:29:18 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:31.922 { 00:18:31.922 "name": "5ef340f2-79de-428a-a780-05763527377b", 00:18:31.922 "aliases": [ 00:18:31.922 "lvs/nvme0n1p0" 00:18:31.922 ], 00:18:31.922 "product_name": "Logical Volume", 00:18:31.922 "block_size": 4096, 00:18:31.922 "num_blocks": 26476544, 00:18:31.922 "uuid": "5ef340f2-79de-428a-a780-05763527377b", 00:18:31.922 "assigned_rate_limits": { 00:18:31.922 "rw_ios_per_sec": 0, 00:18:31.922 "rw_mbytes_per_sec": 0, 00:18:31.922 "r_mbytes_per_sec": 0, 00:18:31.922 "w_mbytes_per_sec": 0 00:18:31.922 }, 00:18:31.922 "claimed": false, 00:18:31.922 "zoned": false, 00:18:31.922 "supported_io_types": { 00:18:31.922 "read": true, 00:18:31.922 "write": true, 00:18:31.922 "unmap": true, 00:18:31.922 "flush": false, 00:18:31.922 "reset": true, 00:18:31.922 "nvme_admin": false, 00:18:31.922 "nvme_io": false, 00:18:31.922 "nvme_io_md": false, 00:18:31.922 "write_zeroes": true, 00:18:31.922 "zcopy": false, 00:18:31.922 "get_zone_info": false, 00:18:31.922 "zone_management": false, 00:18:31.922 "zone_append": false, 00:18:31.922 "compare": false, 00:18:31.922 "compare_and_write": false, 00:18:31.922 "abort": false, 00:18:31.922 "seek_hole": true, 00:18:31.922 "seek_data": true, 00:18:31.922 "copy": false, 00:18:31.922 "nvme_iov_md": false 00:18:31.922 }, 00:18:31.922 "driver_specific": { 00:18:31.922 "lvol": { 00:18:31.922 "lvol_store_uuid": "ceeb90ab-717b-46d5-8b60-bbe967283d88", 00:18:31.922 "base_bdev": "nvme0n1", 00:18:31.922 "thin_provision": true, 00:18:31.922 "num_allocated_clusters": 0, 00:18:31.922 "snapshot": false, 00:18:31.922 "clone": false, 00:18:31.922 "esnap_clone": false 00:18:31.922 } 00:18:31.922 } 00:18:31.922 } 00:18:31.922 ]' 00:18:31.922 02:29:18 ftl.ftl_restore -- 
common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:31.922 02:29:18 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:18:31.922 02:29:18 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:31.922 02:29:18 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:18:31.922 02:29:18 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:31.922 02:29:18 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:18:31.922 02:29:18 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:18:31.922 02:29:18 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:18:31.922 02:29:18 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:32.184 02:29:19 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:32.184 02:29:19 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:32.184 02:29:19 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 5ef340f2-79de-428a-a780-05763527377b 00:18:32.184 02:29:19 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=5ef340f2-79de-428a-a780-05763527377b 00:18:32.184 02:29:19 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:32.184 02:29:19 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:18:32.184 02:29:19 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:18:32.184 02:29:19 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5ef340f2-79de-428a-a780-05763527377b 00:18:32.444 02:29:19 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:32.444 { 00:18:32.444 "name": "5ef340f2-79de-428a-a780-05763527377b", 00:18:32.444 "aliases": [ 00:18:32.444 "lvs/nvme0n1p0" 00:18:32.444 ], 00:18:32.444 "product_name": "Logical Volume", 00:18:32.444 "block_size": 4096, 00:18:32.444 "num_blocks": 26476544, 00:18:32.444 "uuid": "5ef340f2-79de-428a-a780-05763527377b", 00:18:32.444 "assigned_rate_limits": { 00:18:32.444 "rw_ios_per_sec": 0, 00:18:32.444 "rw_mbytes_per_sec": 0, 00:18:32.444 "r_mbytes_per_sec": 0, 00:18:32.444 "w_mbytes_per_sec": 0 00:18:32.444 }, 00:18:32.444 "claimed": false, 00:18:32.444 "zoned": false, 00:18:32.444 "supported_io_types": { 00:18:32.444 "read": true, 00:18:32.444 "write": true, 00:18:32.444 "unmap": true, 00:18:32.444 "flush": false, 00:18:32.444 "reset": true, 00:18:32.444 "nvme_admin": false, 00:18:32.444 "nvme_io": false, 00:18:32.444 "nvme_io_md": false, 00:18:32.444 "write_zeroes": true, 00:18:32.444 "zcopy": false, 00:18:32.444 "get_zone_info": false, 00:18:32.444 "zone_management": false, 00:18:32.444 "zone_append": false, 00:18:32.444 "compare": false, 00:18:32.444 "compare_and_write": false, 00:18:32.444 "abort": false, 00:18:32.444 "seek_hole": true, 00:18:32.444 "seek_data": true, 00:18:32.444 "copy": false, 00:18:32.444 "nvme_iov_md": false 00:18:32.444 }, 00:18:32.444 "driver_specific": { 00:18:32.444 "lvol": { 00:18:32.444 "lvol_store_uuid": "ceeb90ab-717b-46d5-8b60-bbe967283d88", 00:18:32.444 "base_bdev": "nvme0n1", 00:18:32.444 "thin_provision": true, 00:18:32.444 "num_allocated_clusters": 0, 00:18:32.444 "snapshot": false, 00:18:32.444 "clone": false, 00:18:32.444 "esnap_clone": false 00:18:32.444 } 00:18:32.444 } 00:18:32.444 } 00:18:32.444 ]' 00:18:32.444 02:29:19 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 
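The jq probes on either side of this point are autotest_common.sh's get_bdev_size: it fetches the bdev's JSON via rpc.py bdev_get_bdevs, reads block_size and num_blocks, and prints the size in MiB. For this run that is 26476544 blocks x 4096 B / 2^20 = 103424 MiB for the thin-provisioned lvol (and, earlier, 1310720 x 4096 B = 5120 MiB for nvme0n1, which is where base_size=5120 came from). A self-contained sketch under those assumptions — the rpc.py path and bdev name are taken from this run, and a running spdk_tgt plus jq are required:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    get_bdev_size() { # prints the named bdev's size in MiB
        local bdev_info bs nb
        bdev_info=$("$rpc_py" bdev_get_bdevs -b "$1")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")  # 4096 in this run
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")  # 26476544 for the lvol
        echo $((nb * bs / 1024 / 1024))              # 26476544 * 4096 / 1048576 = 103424
    }

    get_bdev_size 5ef340f2-79de-428a-a780-05763527377b

A little further down, restore.sh line 54 logs "[: : integer expression expected": the traced test '[' '' -eq 1 ']' runs an integer comparison against an empty string, apparently because the optional -f flag from the getopts :u:c:f loop was never passed and its variable stayed empty. The test merely fails and the run continues, but a hedged repair is to default the variable before the integer test (opt_fast is a placeholder name, not the script's actual variable):

    if [ "${opt_fast:-0}" -eq 1 ]; then # empty/unset defaults to 0, so no test error
        echo 'fast-shutdown path'
    fi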
00:18:32.444 02:29:19 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:18:32.444 02:29:19 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:32.444 02:29:19 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:18:32.444 02:29:19 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:32.444 02:29:19 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:18:32.444 02:29:19 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:18:32.444 02:29:19 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:32.706 02:29:19 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:32.706 02:29:19 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 5ef340f2-79de-428a-a780-05763527377b 00:18:32.706 02:29:19 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=5ef340f2-79de-428a-a780-05763527377b 00:18:32.706 02:29:19 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:32.706 02:29:19 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:18:32.706 02:29:19 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:18:32.706 02:29:19 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5ef340f2-79de-428a-a780-05763527377b 00:18:32.966 02:29:19 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:32.966 { 00:18:32.966 "name": "5ef340f2-79de-428a-a780-05763527377b", 00:18:32.966 "aliases": [ 00:18:32.966 "lvs/nvme0n1p0" 00:18:32.966 ], 00:18:32.966 "product_name": "Logical Volume", 00:18:32.966 "block_size": 4096, 00:18:32.966 "num_blocks": 26476544, 00:18:32.966 "uuid": "5ef340f2-79de-428a-a780-05763527377b", 00:18:32.966 "assigned_rate_limits": { 00:18:32.966 "rw_ios_per_sec": 0, 00:18:32.966 "rw_mbytes_per_sec": 0, 00:18:32.966 "r_mbytes_per_sec": 0, 00:18:32.966 "w_mbytes_per_sec": 0 00:18:32.966 }, 00:18:32.966 "claimed": false, 00:18:32.966 "zoned": false, 00:18:32.966 "supported_io_types": { 00:18:32.966 "read": true, 00:18:32.966 "write": true, 00:18:32.966 "unmap": true, 00:18:32.966 "flush": false, 00:18:32.966 "reset": true, 00:18:32.966 "nvme_admin": false, 00:18:32.966 "nvme_io": false, 00:18:32.966 "nvme_io_md": false, 00:18:32.966 "write_zeroes": true, 00:18:32.966 "zcopy": false, 00:18:32.966 "get_zone_info": false, 00:18:32.966 "zone_management": false, 00:18:32.966 "zone_append": false, 00:18:32.966 "compare": false, 00:18:32.966 "compare_and_write": false, 00:18:32.966 "abort": false, 00:18:32.966 "seek_hole": true, 00:18:32.966 "seek_data": true, 00:18:32.966 "copy": false, 00:18:32.966 "nvme_iov_md": false 00:18:32.966 }, 00:18:32.966 "driver_specific": { 00:18:32.966 "lvol": { 00:18:32.967 "lvol_store_uuid": "ceeb90ab-717b-46d5-8b60-bbe967283d88", 00:18:32.967 "base_bdev": "nvme0n1", 00:18:32.967 "thin_provision": true, 00:18:32.967 "num_allocated_clusters": 0, 00:18:32.967 "snapshot": false, 00:18:32.967 "clone": false, 00:18:32.967 "esnap_clone": false 00:18:32.967 } 00:18:32.967 } 00:18:32.967 } 00:18:32.967 ]' 00:18:32.967 02:29:19 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:32.967 02:29:19 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:18:32.967 02:29:19 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:32.967 02:29:19 ftl.ftl_restore -- 
common/autotest_common.sh@1386 -- # nb=26476544 00:18:32.967 02:29:19 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:32.967 02:29:19 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:18:32.967 02:29:19 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:32.967 02:29:19 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 5ef340f2-79de-428a-a780-05763527377b --l2p_dram_limit 10' 00:18:32.967 02:29:19 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:32.967 02:29:19 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:32.967 02:29:19 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:32.967 02:29:19 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:32.967 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:32.967 02:29:19 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5ef340f2-79de-428a-a780-05763527377b --l2p_dram_limit 10 -c nvc0n1p0 00:18:33.226 [2024-11-04 02:29:20.184104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.226 [2024-11-04 02:29:20.184235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:33.226 [2024-11-04 02:29:20.184256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:33.226 [2024-11-04 02:29:20.184263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.226 [2024-11-04 02:29:20.184316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.226 [2024-11-04 02:29:20.184324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:33.226 [2024-11-04 02:29:20.184331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:33.226 [2024-11-04 02:29:20.184337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.226 [2024-11-04 02:29:20.184357] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:33.226 [2024-11-04 02:29:20.184941] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:33.226 [2024-11-04 02:29:20.184958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.226 [2024-11-04 02:29:20.184964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:33.226 [2024-11-04 02:29:20.184972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.606 ms 00:18:33.226 [2024-11-04 02:29:20.184978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.226 [2024-11-04 02:29:20.185241] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID eb37cb69-a1b4-437f-9116-fc28bc24e968 00:18:33.226 [2024-11-04 02:29:20.186204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.226 [2024-11-04 02:29:20.186237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:33.226 [2024-11-04 02:29:20.186245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:33.226 [2024-11-04 02:29:20.186255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.226 [2024-11-04 02:29:20.190959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.226 [2024-11-04 
02:29:20.191071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:33.226 [2024-11-04 02:29:20.191084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.671 ms 00:18:33.226 [2024-11-04 02:29:20.191093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.226 [2024-11-04 02:29:20.191160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.226 [2024-11-04 02:29:20.191169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:33.226 [2024-11-04 02:29:20.191175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:33.226 [2024-11-04 02:29:20.191185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.226 [2024-11-04 02:29:20.191229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.226 [2024-11-04 02:29:20.191239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:33.226 [2024-11-04 02:29:20.191245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:33.226 [2024-11-04 02:29:20.191252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.226 [2024-11-04 02:29:20.191271] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:33.226 [2024-11-04 02:29:20.194124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.226 [2024-11-04 02:29:20.194216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:33.226 [2024-11-04 02:29:20.194230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.858 ms 00:18:33.226 [2024-11-04 02:29:20.194239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.226 [2024-11-04 02:29:20.194266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.226 [2024-11-04 02:29:20.194273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:33.226 [2024-11-04 02:29:20.194280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:33.226 [2024-11-04 02:29:20.194286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.226 [2024-11-04 02:29:20.194300] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:33.226 [2024-11-04 02:29:20.194403] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:33.226 [2024-11-04 02:29:20.194415] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:33.226 [2024-11-04 02:29:20.194423] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:33.226 [2024-11-04 02:29:20.194433] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:33.226 [2024-11-04 02:29:20.194440] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:33.226 [2024-11-04 02:29:20.194448] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:33.227 [2024-11-04 02:29:20.194453] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:33.227 [2024-11-04 02:29:20.194460] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:33.227 [2024-11-04 02:29:20.194465] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:33.227 [2024-11-04 02:29:20.194474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.227 [2024-11-04 02:29:20.194479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:33.227 [2024-11-04 02:29:20.194486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:18:33.227 [2024-11-04 02:29:20.194497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.227 [2024-11-04 02:29:20.194562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.227 [2024-11-04 02:29:20.194569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:33.227 [2024-11-04 02:29:20.194576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:33.227 [2024-11-04 02:29:20.194581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.227 [2024-11-04 02:29:20.194655] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:33.227 [2024-11-04 02:29:20.194665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:33.227 [2024-11-04 02:29:20.194673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:33.227 [2024-11-04 02:29:20.194679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:33.227 [2024-11-04 02:29:20.194686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:33.227 [2024-11-04 02:29:20.194692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:33.227 [2024-11-04 02:29:20.194698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:33.227 [2024-11-04 02:29:20.194703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:33.227 [2024-11-04 02:29:20.194710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:33.227 [2024-11-04 02:29:20.194715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:33.227 [2024-11-04 02:29:20.194721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:33.227 [2024-11-04 02:29:20.194726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:33.227 [2024-11-04 02:29:20.194733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:33.227 [2024-11-04 02:29:20.194738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:33.227 [2024-11-04 02:29:20.194745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:33.227 [2024-11-04 02:29:20.194752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:33.227 [2024-11-04 02:29:20.194760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:33.227 [2024-11-04 02:29:20.194765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:33.227 [2024-11-04 02:29:20.194772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:33.227 [2024-11-04 02:29:20.194777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:33.227 [2024-11-04 02:29:20.194784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:33.227 [2024-11-04 02:29:20.194790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:33.227 [2024-11-04 02:29:20.194796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:33.227 
[2024-11-04 02:29:20.194801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:33.227 [2024-11-04 02:29:20.194807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:33.227 [2024-11-04 02:29:20.194812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:33.227 [2024-11-04 02:29:20.194818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:33.227 [2024-11-04 02:29:20.194823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:33.227 [2024-11-04 02:29:20.194829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:33.227 [2024-11-04 02:29:20.194834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:33.227 [2024-11-04 02:29:20.194840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:33.227 [2024-11-04 02:29:20.194845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:33.227 [2024-11-04 02:29:20.194853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:33.227 [2024-11-04 02:29:20.194858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:33.227 [2024-11-04 02:29:20.194879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:33.227 [2024-11-04 02:29:20.194885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:33.227 [2024-11-04 02:29:20.194892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:33.227 [2024-11-04 02:29:20.194897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:33.227 [2024-11-04 02:29:20.194904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:33.227 [2024-11-04 02:29:20.194908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:33.227 [2024-11-04 02:29:20.194915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:33.227 [2024-11-04 02:29:20.194920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:33.227 [2024-11-04 02:29:20.194928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:33.227 [2024-11-04 02:29:20.194933] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:33.227 [2024-11-04 02:29:20.194940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:33.227 [2024-11-04 02:29:20.194946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:33.227 [2024-11-04 02:29:20.194953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:33.227 [2024-11-04 02:29:20.194959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:33.227 [2024-11-04 02:29:20.194969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:33.227 [2024-11-04 02:29:20.194974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:33.227 [2024-11-04 02:29:20.194981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:33.227 [2024-11-04 02:29:20.194986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:33.227 [2024-11-04 02:29:20.194992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:33.227 [2024-11-04 02:29:20.195000] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:33.227 [2024-11-04 
02:29:20.195009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:33.227 [2024-11-04 02:29:20.195015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:33.227 [2024-11-04 02:29:20.195022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:33.227 [2024-11-04 02:29:20.195028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:33.227 [2024-11-04 02:29:20.195035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:33.227 [2024-11-04 02:29:20.195040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:33.227 [2024-11-04 02:29:20.195047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:33.227 [2024-11-04 02:29:20.195052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:33.227 [2024-11-04 02:29:20.195059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:33.227 [2024-11-04 02:29:20.195064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:33.227 [2024-11-04 02:29:20.195071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:33.227 [2024-11-04 02:29:20.195077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:33.227 [2024-11-04 02:29:20.195084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:33.227 [2024-11-04 02:29:20.195089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:33.227 [2024-11-04 02:29:20.195096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:33.227 [2024-11-04 02:29:20.195101] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:33.227 [2024-11-04 02:29:20.195109] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:33.227 [2024-11-04 02:29:20.195117] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:33.227 [2024-11-04 02:29:20.195124] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:33.227 [2024-11-04 02:29:20.195130] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:33.227 [2024-11-04 02:29:20.195137] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:33.227 [2024-11-04 02:29:20.195142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.227 [2024-11-04 02:29:20.195150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:33.227 [2024-11-04 02:29:20.195156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:18:33.227 [2024-11-04 02:29:20.195162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.227 [2024-11-04 02:29:20.195204] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:18:33.227 [2024-11-04 02:29:20.195215] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:37.438 [2024-11-04 02:29:24.001330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.438 [2024-11-04 02:29:24.001414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:37.438 [2024-11-04 02:29:24.001433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3806.109 ms 00:18:37.438 [2024-11-04 02:29:24.001445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.438 [2024-11-04 02:29:24.033341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.438 [2024-11-04 02:29:24.033414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:37.438 [2024-11-04 02:29:24.033429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.650 ms 00:18:37.438 [2024-11-04 02:29:24.033441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.438 [2024-11-04 02:29:24.033586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.438 [2024-11-04 02:29:24.033600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:37.438 [2024-11-04 02:29:24.033610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:18:37.438 [2024-11-04 02:29:24.033623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.438 [2024-11-04 02:29:24.069336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.438 [2024-11-04 02:29:24.069389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:37.438 [2024-11-04 02:29:24.069401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.675 ms 00:18:37.438 [2024-11-04 02:29:24.069412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.438 [2024-11-04 02:29:24.069448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.438 [2024-11-04 02:29:24.069460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:37.438 [2024-11-04 02:29:24.069470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:37.438 [2024-11-04 02:29:24.069483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.438 [2024-11-04 02:29:24.070103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.438 [2024-11-04 02:29:24.070129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:37.438 [2024-11-04 02:29:24.070139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:18:37.438 [2024-11-04 02:29:24.070150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.438 
[2024-11-04 02:29:24.070267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.438 [2024-11-04 02:29:24.070279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:37.438 [2024-11-04 02:29:24.070288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:18:37.438 [2024-11-04 02:29:24.070301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.438 [2024-11-04 02:29:24.087830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.438 [2024-11-04 02:29:24.088056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:37.438 [2024-11-04 02:29:24.088077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.507 ms 00:18:37.438 [2024-11-04 02:29:24.088090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.438 [2024-11-04 02:29:24.101216] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:37.438 [2024-11-04 02:29:24.105093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.438 [2024-11-04 02:29:24.105139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:37.438 [2024-11-04 02:29:24.105153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.903 ms 00:18:37.438 [2024-11-04 02:29:24.105161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.438 [2024-11-04 02:29:24.221677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.438 [2024-11-04 02:29:24.221943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:37.438 [2024-11-04 02:29:24.221976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 116.477 ms 00:18:37.438 [2024-11-04 02:29:24.221988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.438 [2024-11-04 02:29:24.222197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.438 [2024-11-04 02:29:24.222210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:37.438 [2024-11-04 02:29:24.222225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:18:37.438 [2024-11-04 02:29:24.222237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.438 [2024-11-04 02:29:24.248871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.438 [2024-11-04 02:29:24.249053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:37.438 [2024-11-04 02:29:24.249082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.562 ms 00:18:37.438 [2024-11-04 02:29:24.249091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.438 [2024-11-04 02:29:24.273514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.438 [2024-11-04 02:29:24.273560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:37.438 [2024-11-04 02:29:24.273576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.372 ms 00:18:37.438 [2024-11-04 02:29:24.273584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.438 [2024-11-04 02:29:24.274247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.438 [2024-11-04 02:29:24.274274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:37.438 
[2024-11-04 02:29:24.274286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.613 ms 00:18:37.438 [2024-11-04 02:29:24.274295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.438 [2024-11-04 02:29:24.360587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.438 [2024-11-04 02:29:24.360775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:37.438 [2024-11-04 02:29:24.360806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.221 ms 00:18:37.438 [2024-11-04 02:29:24.360815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.438 [2024-11-04 02:29:24.388256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.438 [2024-11-04 02:29:24.388305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:37.438 [2024-11-04 02:29:24.388326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.291 ms 00:18:37.438 [2024-11-04 02:29:24.388334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.438 [2024-11-04 02:29:24.414266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.438 [2024-11-04 02:29:24.414316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:37.438 [2024-11-04 02:29:24.414332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.877 ms 00:18:37.438 [2024-11-04 02:29:24.414340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.438 [2024-11-04 02:29:24.440648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.438 [2024-11-04 02:29:24.440706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:37.438 [2024-11-04 02:29:24.440722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.254 ms 00:18:37.438 [2024-11-04 02:29:24.440730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.438 [2024-11-04 02:29:24.440785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.438 [2024-11-04 02:29:24.440794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:37.438 [2024-11-04 02:29:24.440809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:37.438 [2024-11-04 02:29:24.440817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.438 [2024-11-04 02:29:24.440933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.438 [2024-11-04 02:29:24.440945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:37.438 [2024-11-04 02:29:24.440957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:37.438 [2024-11-04 02:29:24.440964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.438 [2024-11-04 02:29:24.442149] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4257.506 ms, result 0 00:18:37.438 { 00:18:37.438 "name": "ftl0", 00:18:37.438 "uuid": "eb37cb69-a1b4-437f-9116-fc28bc24e968" 00:18:37.438 } 00:18:37.438 02:29:24 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:37.438 02:29:24 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:37.699 02:29:24 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:18:37.699 02:29:24 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:37.962 [2024-11-04 02:29:24.881491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.962 [2024-11-04 02:29:24.881547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:37.962 [2024-11-04 02:29:24.881559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:37.962 [2024-11-04 02:29:24.881581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.962 [2024-11-04 02:29:24.881607] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:37.962 [2024-11-04 02:29:24.884715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.962 [2024-11-04 02:29:24.884760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:37.962 [2024-11-04 02:29:24.884775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.084 ms 00:18:37.962 [2024-11-04 02:29:24.884784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.962 [2024-11-04 02:29:24.885079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.962 [2024-11-04 02:29:24.885112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:37.962 [2024-11-04 02:29:24.885126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:18:37.962 [2024-11-04 02:29:24.885138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.962 [2024-11-04 02:29:24.888392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.962 [2024-11-04 02:29:24.888417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:37.962 [2024-11-04 02:29:24.888431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.236 ms 00:18:37.962 [2024-11-04 02:29:24.888441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.962 [2024-11-04 02:29:24.894754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.962 [2024-11-04 02:29:24.894790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:37.962 [2024-11-04 02:29:24.894804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.289 ms 00:18:37.962 [2024-11-04 02:29:24.894812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.962 [2024-11-04 02:29:24.920730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.962 [2024-11-04 02:29:24.920777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:37.962 [2024-11-04 02:29:24.920792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.808 ms 00:18:37.962 [2024-11-04 02:29:24.920799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.962 [2024-11-04 02:29:24.939188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.962 [2024-11-04 02:29:24.939239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:37.962 [2024-11-04 02:29:24.939254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.329 ms 00:18:37.962 [2024-11-04 02:29:24.939263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.962 [2024-11-04 02:29:24.939443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.962 [2024-11-04 02:29:24.939456] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:37.962 [2024-11-04 02:29:24.939468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:18:37.962 [2024-11-04 02:29:24.939476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.962 [2024-11-04 02:29:24.965266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.962 [2024-11-04 02:29:24.965313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:37.962 [2024-11-04 02:29:24.965329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.766 ms 00:18:37.962 [2024-11-04 02:29:24.965336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.962 [2024-11-04 02:29:24.990925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.962 [2024-11-04 02:29:24.990973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:37.962 [2024-11-04 02:29:24.990988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.532 ms 00:18:37.962 [2024-11-04 02:29:24.990996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.962 [2024-11-04 02:29:25.015684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.962 [2024-11-04 02:29:25.015752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:37.962 [2024-11-04 02:29:25.015767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.627 ms 00:18:37.962 [2024-11-04 02:29:25.015775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.962 [2024-11-04 02:29:25.040193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.962 [2024-11-04 02:29:25.040237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:37.962 [2024-11-04 02:29:25.040253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.278 ms 00:18:37.962 [2024-11-04 02:29:25.040260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.963 [2024-11-04 02:29:25.040309] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:37.963 [2024-11-04 02:29:25.040326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:37.963 [2024-11-04 02:29:25.040339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:37.963 [2024-11-04 02:29:25.040348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:37.963 [2024-11-04 02:29:25.040358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:37.963 [2024-11-04 02:29:25.040365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:37.963 [2024-11-04 02:29:25.040375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:37.963 [2024-11-04 02:29:25.040383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:37.963 [2024-11-04 02:29:25.040396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:37.963 [2024-11-04 02:29:25.040403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:37.963 [2024-11-04 02:29:25.040413] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 10-100: 0 / 261120 wr_cnt: 0 state: free (91 identical per-band entries, 02:29:25.040421 through 02:29:25.041262, condensed)
[2024-11-04 02:29:25.041278] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:37.964
[2024-11-04 02:29:25.041288] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: eb37cb69-a1b4-437f-9116-fc28bc24e968 00:18:37.964
[2024-11-04 02:29:25.041297] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:37.964
[2024-11-04 02:29:25.041311] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:37.964
[2024-11-04 02:29:25.041319] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:37.964
[2024-11-04 02:29:25.041328] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:37.964
[2024-11-04 02:29:25.041339] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:37.964
[2024-11-04 02:29:25.041348] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:37.964
[2024-11-04 02:29:25.041357] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:37.964
[2024-11-04 02:29:25.041366] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:37.964
[2024-11-04 02:29:25.041372] ftl_debug.c: 220:ftl_dev_dump_stats:
*NOTICE*: [FTL][ftl0] start: 0 00:18:37.964 [2024-11-04 02:29:25.041381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.964 [2024-11-04 02:29:25.041389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:37.964 [2024-11-04 02:29:25.041399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.074 ms 00:18:37.964 [2024-11-04 02:29:25.041406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.964 [2024-11-04 02:29:25.054914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.964 [2024-11-04 02:29:25.054954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:37.964 [2024-11-04 02:29:25.054968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.460 ms 00:18:37.964 [2024-11-04 02:29:25.054976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.964 [2024-11-04 02:29:25.055381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.964 [2024-11-04 02:29:25.055400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:37.964 [2024-11-04 02:29:25.055411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:18:37.964 [2024-11-04 02:29:25.055419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.225 [2024-11-04 02:29:25.102191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.225 [2024-11-04 02:29:25.102383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:38.225 [2024-11-04 02:29:25.102411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.225 [2024-11-04 02:29:25.102420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.225 [2024-11-04 02:29:25.102504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.225 [2024-11-04 02:29:25.102514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:38.225 [2024-11-04 02:29:25.102525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.225 [2024-11-04 02:29:25.102533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.225 [2024-11-04 02:29:25.102646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.225 [2024-11-04 02:29:25.102658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:38.225 [2024-11-04 02:29:25.102669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.225 [2024-11-04 02:29:25.102676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.225 [2024-11-04 02:29:25.102701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.225 [2024-11-04 02:29:25.102709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:38.225 [2024-11-04 02:29:25.102718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.225 [2024-11-04 02:29:25.102727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.225 [2024-11-04 02:29:25.187119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.225 [2024-11-04 02:29:25.187181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:38.225 [2024-11-04 02:29:25.187196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
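The "WAF: inf" in the statistics dump above follows from the two counters printed beside it: write amplification is media writes divided by user writes, and this run shut the device down after 960 internal writes with zero user writes, so the ratio is undefined. A minimal bash sketch of that arithmetic, assuming the reported WAF is exactly this ratio (the "inf" value is consistent with that reading, but the log does not spell the formula out):

# Assumed reading: WAF = total writes / user writes, per the dump above.
total_writes=960   # "total writes" from ftl_dev_dump_stats
user_writes=0      # "user writes" from ftl_dev_dump_stats
if [ "$user_writes" -eq 0 ]; then
  echo "WAF: inf"  # no user data was written before shutdown, so the ratio is undefined
else
  awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.2f\n", t / u }'
fi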
00:18:38.225 [2024-11-04 02:29:25.187205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.225 [2024-11-04 02:29:25.255445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.225 [2024-11-04 02:29:25.255502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:38.225 [2024-11-04 02:29:25.255518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.225 [2024-11-04 02:29:25.255527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.225 [2024-11-04 02:29:25.255622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.225 [2024-11-04 02:29:25.255636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:38.225 [2024-11-04 02:29:25.255648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.225 [2024-11-04 02:29:25.255656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.225 [2024-11-04 02:29:25.255752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.225 [2024-11-04 02:29:25.255764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:38.226 [2024-11-04 02:29:25.255774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.226 [2024-11-04 02:29:25.255783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.226 [2024-11-04 02:29:25.255920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.226 [2024-11-04 02:29:25.255933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:38.226 [2024-11-04 02:29:25.255946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.226 [2024-11-04 02:29:25.255955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.226 [2024-11-04 02:29:25.255993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.226 [2024-11-04 02:29:25.256003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:38.226 [2024-11-04 02:29:25.256013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.226 [2024-11-04 02:29:25.256022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.226 [2024-11-04 02:29:25.256066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.226 [2024-11-04 02:29:25.256076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:38.226 [2024-11-04 02:29:25.256089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.226 [2024-11-04 02:29:25.256097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.226 [2024-11-04 02:29:25.256153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:38.226 [2024-11-04 02:29:25.256165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:38.226 [2024-11-04 02:29:25.256175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:38.226 [2024-11-04 02:29:25.256184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.226 [2024-11-04 02:29:25.256336] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 374.799 ms, result 0 00:18:38.226 true 00:18:38.226 02:29:25 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 74525 
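The killprocess 74525 call above is expanded step by step by the xtrace lines that follow: test the pid argument, probe the process with kill -0, resolve its command name through ps (reactor_0 here), log, then kill and wait. A condensed bash sketch of what those traced steps amount to; this is a reading of the trace, not the harness's verbatim source:

killprocess() {
  local pid=$1
  [ -z "$pid" ] && return 1                           # '[' -z 74525 ']' in the trace
  kill -0 "$pid" || return 1                          # is the process still alive?
  if [ "$(uname)" = Linux ]; then
    process_name=$(ps --no-headers -o comm= "$pid")   # resolves to "reactor_0" in this run
  fi
  # the trace also compares $process_name against "sudo" and branches; omitted here
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"                                         # reap it so the harness sees the exit status
}

Called as killprocess 74525, this reproduces the command sequence the xtrace below shows.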
00:18:38.226 02:29:25 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 74525 ']' 00:18:38.226 02:29:25 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 74525 00:18:38.226 02:29:25 ftl.ftl_restore -- common/autotest_common.sh@957 -- # uname 00:18:38.226 02:29:25 ftl.ftl_restore -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:38.226 02:29:25 ftl.ftl_restore -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74525 00:18:38.226 killing process with pid 74525 00:18:38.226 02:29:25 ftl.ftl_restore -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:38.226 02:29:25 ftl.ftl_restore -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:38.226 02:29:25 ftl.ftl_restore -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74525' 00:18:38.226 02:29:25 ftl.ftl_restore -- common/autotest_common.sh@971 -- # kill 74525 00:18:38.226 02:29:25 ftl.ftl_restore -- common/autotest_common.sh@976 -- # wait 74525 00:18:44.814 02:29:31 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:48.111 262144+0 records in 00:18:48.111 262144+0 records out 00:18:48.111 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.01606 s, 267 MB/s 00:18:48.111 02:29:35 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:50.659 02:29:37 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:50.659 [2024-11-04 02:29:37.417399] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:18:50.659 [2024-11-04 02:29:37.417529] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74762 ] 00:18:50.659 [2024-11-04 02:29:37.578220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.659 [2024-11-04 02:29:37.697950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.921 [2024-11-04 02:29:37.963688] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:50.921 [2024-11-04 02:29:37.963782] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:51.183 [2024-11-04 02:29:38.127790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.183 [2024-11-04 02:29:38.127852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:51.183 [2024-11-04 02:29:38.127889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:51.183 [2024-11-04 02:29:38.127899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.183 [2024-11-04 02:29:38.127961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.183 [2024-11-04 02:29:38.127972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:51.183 [2024-11-04 02:29:38.127983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:51.183 [2024-11-04 02:29:38.127991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.183 [2024-11-04 02:29:38.128014] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:18:51.183 [2024-11-04 02:29:38.129034] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:51.183 [2024-11-04 02:29:38.129078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.183 [2024-11-04 02:29:38.129089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:51.183 [2024-11-04 02:29:38.129099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.069 ms 00:18:51.183 [2024-11-04 02:29:38.129108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.183 [2024-11-04 02:29:38.130973] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:51.183 [2024-11-04 02:29:38.145258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.183 [2024-11-04 02:29:38.145308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:51.183 [2024-11-04 02:29:38.145321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.288 ms 00:18:51.184 [2024-11-04 02:29:38.145330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.184 [2024-11-04 02:29:38.145410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.184 [2024-11-04 02:29:38.145424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:51.184 [2024-11-04 02:29:38.145434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:51.184 [2024-11-04 02:29:38.145442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.184 [2024-11-04 02:29:38.153576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.184 [2024-11-04 02:29:38.153622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:51.184 [2024-11-04 02:29:38.153633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.053 ms 00:18:51.184 [2024-11-04 02:29:38.153642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.184 [2024-11-04 02:29:38.153729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.184 [2024-11-04 02:29:38.153738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:51.184 [2024-11-04 02:29:38.153748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:18:51.184 [2024-11-04 02:29:38.153755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.184 [2024-11-04 02:29:38.153800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.184 [2024-11-04 02:29:38.153810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:51.184 [2024-11-04 02:29:38.153819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:51.184 [2024-11-04 02:29:38.153827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.184 [2024-11-04 02:29:38.153851] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:51.184 [2024-11-04 02:29:38.157924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.184 [2024-11-04 02:29:38.157964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:51.184 [2024-11-04 02:29:38.157975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.079 ms 00:18:51.184 [2024-11-04 02:29:38.157987] 
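A consistency check on the layout dump that follows: the MiB figures printed by ftl_layout.c agree with the superblock's hex block counts once a 4 KiB FTL block size is assumed (the block size itself is never printed in this log, so treat it as an assumption). The l2p region's 80.00 MiB is exactly the 20971520 L2P entries times the 4-byte address size, and the sb region's 0x20 blocks work out to the printed 0.12 MiB. A small bash sketch of that arithmetic:

block_size=4096                         # assumed FTL block size; not stated in the log
l2p_entries=20971520                    # "L2P entries" from ftl_layout_setup below
l2p_addr_size=4                         # "L2P address size" from ftl_layout_setup below
echo "l2p region: $(( l2p_entries * l2p_addr_size / 1024 / 1024 )) MiB"   # -> 80 MiB ("blocks: 80.00 MiB")
sb_blocks=$(( 0x20 ))                   # "blk_sz:0x20" for the sb region in the superblock dump
awk -v b="$sb_blocks" -v s="$block_size" \
    'BEGIN { printf "sb region: %.3f MiB\n", b * s / 1048576 }'           # -> 0.125 MiB, printed as 0.12

Equivalently, the l2p region's blk_sz:0x5000 is 20480 blocks, and 20480 x 4096 B is the same 80 MiB, so the two views of the layout line up.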
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.184 [2024-11-04 02:29:38.158022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.184 [2024-11-04 02:29:38.158031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:51.184 [2024-11-04 02:29:38.158041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:51.184 [2024-11-04 02:29:38.158049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.184 [2024-11-04 02:29:38.158102] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:51.184 [2024-11-04 02:29:38.158125] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:51.184 [2024-11-04 02:29:38.158163] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:51.184 [2024-11-04 02:29:38.158182] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:51.184 [2024-11-04 02:29:38.158287] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:51.184 [2024-11-04 02:29:38.158300] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:51.184 [2024-11-04 02:29:38.158311] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:51.184 [2024-11-04 02:29:38.158322] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:51.184 [2024-11-04 02:29:38.158331] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:51.184 [2024-11-04 02:29:38.158340] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:51.184 [2024-11-04 02:29:38.158348] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:51.184 [2024-11-04 02:29:38.158356] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:51.184 [2024-11-04 02:29:38.158364] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:51.184 [2024-11-04 02:29:38.158376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.184 [2024-11-04 02:29:38.158384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:51.184 [2024-11-04 02:29:38.158392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:18:51.184 [2024-11-04 02:29:38.158399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.184 [2024-11-04 02:29:38.158482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.184 [2024-11-04 02:29:38.158491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:51.184 [2024-11-04 02:29:38.158500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:51.184 [2024-11-04 02:29:38.158507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.184 [2024-11-04 02:29:38.158611] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:51.184 [2024-11-04 02:29:38.158630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:51.184 [2024-11-04 02:29:38.158642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:18:51.184 [2024-11-04 02:29:38.158650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:51.184 [2024-11-04 02:29:38.158658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:51.184 [2024-11-04 02:29:38.158665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:51.184 [2024-11-04 02:29:38.158672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:51.184 [2024-11-04 02:29:38.158679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:51.184 [2024-11-04 02:29:38.158686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:51.184 [2024-11-04 02:29:38.158693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:51.184 [2024-11-04 02:29:38.158700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:51.184 [2024-11-04 02:29:38.158707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:51.184 [2024-11-04 02:29:38.158714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:51.184 [2024-11-04 02:29:38.158720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:51.184 [2024-11-04 02:29:38.158730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:51.184 [2024-11-04 02:29:38.158742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:51.184 [2024-11-04 02:29:38.158749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:51.184 [2024-11-04 02:29:38.158756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:51.184 [2024-11-04 02:29:38.158763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:51.184 [2024-11-04 02:29:38.158770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:51.184 [2024-11-04 02:29:38.158778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:51.184 [2024-11-04 02:29:38.158784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:51.184 [2024-11-04 02:29:38.158791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:51.184 [2024-11-04 02:29:38.158798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:51.184 [2024-11-04 02:29:38.158804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:51.184 [2024-11-04 02:29:38.158811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:51.184 [2024-11-04 02:29:38.158817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:51.184 [2024-11-04 02:29:38.158823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:51.184 [2024-11-04 02:29:38.158829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:51.184 [2024-11-04 02:29:38.158836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:51.184 [2024-11-04 02:29:38.158843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:51.184 [2024-11-04 02:29:38.158849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:51.184 [2024-11-04 02:29:38.158855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:51.184 [2024-11-04 02:29:38.158862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:51.184 [2024-11-04 02:29:38.158897] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:18:51.184 [2024-11-04 02:29:38.158904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:51.184 [2024-11-04 02:29:38.158911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:51.184 [2024-11-04 02:29:38.158919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:51.184 [2024-11-04 02:29:38.158926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:51.184 [2024-11-04 02:29:38.158934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:51.184 [2024-11-04 02:29:38.158941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:51.184 [2024-11-04 02:29:38.158948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:51.184 [2024-11-04 02:29:38.158954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:51.184 [2024-11-04 02:29:38.158960] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:51.184 [2024-11-04 02:29:38.158969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:51.184 [2024-11-04 02:29:38.158977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:51.184 [2024-11-04 02:29:38.158986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:51.184 [2024-11-04 02:29:38.158994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:51.184 [2024-11-04 02:29:38.159001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:51.184 [2024-11-04 02:29:38.159008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:51.184 [2024-11-04 02:29:38.159015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:51.184 [2024-11-04 02:29:38.159022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:51.184 [2024-11-04 02:29:38.159030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:51.185 [2024-11-04 02:29:38.159038] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:51.185 [2024-11-04 02:29:38.159048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:51.185 [2024-11-04 02:29:38.159057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:51.185 [2024-11-04 02:29:38.159083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:51.185 [2024-11-04 02:29:38.159092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:51.185 [2024-11-04 02:29:38.159099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:51.185 [2024-11-04 02:29:38.159106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:51.185 [2024-11-04 02:29:38.159133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:51.185 [2024-11-04 02:29:38.159141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:51.185 [2024-11-04 02:29:38.159148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:51.185 [2024-11-04 02:29:38.159155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:51.185 [2024-11-04 02:29:38.159163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:51.185 [2024-11-04 02:29:38.159170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:51.185 [2024-11-04 02:29:38.159178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:51.185 [2024-11-04 02:29:38.159186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:51.185 [2024-11-04 02:29:38.159193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:51.185 [2024-11-04 02:29:38.159200] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:51.185 [2024-11-04 02:29:38.159208] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:51.185 [2024-11-04 02:29:38.159219] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:51.185 [2024-11-04 02:29:38.159227] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:51.185 [2024-11-04 02:29:38.159234] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:51.185 [2024-11-04 02:29:38.159242] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:51.185 [2024-11-04 02:29:38.159250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.185 [2024-11-04 02:29:38.159257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:51.185 [2024-11-04 02:29:38.159265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:18:51.185 [2024-11-04 02:29:38.159274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.185 [2024-11-04 02:29:38.191378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.185 [2024-11-04 02:29:38.191429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:51.185 [2024-11-04 02:29:38.191441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.059 ms 00:18:51.185 [2024-11-04 02:29:38.191450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.185 [2024-11-04 02:29:38.191539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.185 [2024-11-04 02:29:38.191553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:51.185 [2024-11-04 02:29:38.191562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.065 ms 00:18:51.185 [2024-11-04 02:29:38.191570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.185 [2024-11-04 02:29:38.239150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.185 [2024-11-04 02:29:38.239187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:51.185 [2024-11-04 02:29:38.239199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.520 ms 00:18:51.185 [2024-11-04 02:29:38.239207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.185 [2024-11-04 02:29:38.239246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.185 [2024-11-04 02:29:38.239255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:51.185 [2024-11-04 02:29:38.239263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:51.185 [2024-11-04 02:29:38.239274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.185 [2024-11-04 02:29:38.239641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.185 [2024-11-04 02:29:38.239662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:51.185 [2024-11-04 02:29:38.239671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:18:51.185 [2024-11-04 02:29:38.239679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.185 [2024-11-04 02:29:38.239810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.185 [2024-11-04 02:29:38.239819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:51.185 [2024-11-04 02:29:38.239828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:18:51.185 [2024-11-04 02:29:38.239835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.185 [2024-11-04 02:29:38.252950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.185 [2024-11-04 02:29:38.253087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:51.185 [2024-11-04 02:29:38.253104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.093 ms 00:18:51.185 [2024-11-04 02:29:38.253115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.185 [2024-11-04 02:29:38.265853] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:51.185 [2024-11-04 02:29:38.265895] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:51.185 [2024-11-04 02:29:38.265907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.185 [2024-11-04 02:29:38.265915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:51.185 [2024-11-04 02:29:38.265923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.699 ms 00:18:51.185 [2024-11-04 02:29:38.265930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.446 [2024-11-04 02:29:38.292818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.447 [2024-11-04 02:29:38.292964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:51.447 [2024-11-04 02:29:38.292986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.853 ms 00:18:51.447 [2024-11-04 02:29:38.292994] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.447 [2024-11-04 02:29:38.304879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.447 [2024-11-04 02:29:38.304918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:51.447 [2024-11-04 02:29:38.304928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.850 ms 00:18:51.447 [2024-11-04 02:29:38.304935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.447 [2024-11-04 02:29:38.316512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.447 [2024-11-04 02:29:38.316635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:51.447 [2024-11-04 02:29:38.316650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.543 ms 00:18:51.447 [2024-11-04 02:29:38.316657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.447 [2024-11-04 02:29:38.317272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.447 [2024-11-04 02:29:38.317294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:51.447 [2024-11-04 02:29:38.317303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:18:51.447 [2024-11-04 02:29:38.317310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.447 [2024-11-04 02:29:38.373994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.447 [2024-11-04 02:29:38.374044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:51.447 [2024-11-04 02:29:38.374058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.666 ms 00:18:51.447 [2024-11-04 02:29:38.374067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.447 [2024-11-04 02:29:38.384628] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:51.447 [2024-11-04 02:29:38.387202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.447 [2024-11-04 02:29:38.387337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:51.447 [2024-11-04 02:29:38.387354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.084 ms 00:18:51.447 [2024-11-04 02:29:38.387362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.447 [2024-11-04 02:29:38.387464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.447 [2024-11-04 02:29:38.387475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:51.447 [2024-11-04 02:29:38.387484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:51.447 [2024-11-04 02:29:38.387492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.447 [2024-11-04 02:29:38.387561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.447 [2024-11-04 02:29:38.387574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:51.447 [2024-11-04 02:29:38.387582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:51.447 [2024-11-04 02:29:38.387590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.447 [2024-11-04 02:29:38.387611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.447 [2024-11-04 02:29:38.387620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:18:51.447 [2024-11-04 02:29:38.387628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:51.447 [2024-11-04 02:29:38.387636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.447 [2024-11-04 02:29:38.387666] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:51.447 [2024-11-04 02:29:38.387676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.447 [2024-11-04 02:29:38.387684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:51.447 [2024-11-04 02:29:38.387695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:51.447 [2024-11-04 02:29:38.387702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.447 [2024-11-04 02:29:38.412084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.447 [2024-11-04 02:29:38.412114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:51.447 [2024-11-04 02:29:38.412126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.343 ms 00:18:51.447 [2024-11-04 02:29:38.412134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.447 [2024-11-04 02:29:38.412204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.447 [2024-11-04 02:29:38.412214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:51.447 [2024-11-04 02:29:38.412222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:51.447 [2024-11-04 02:29:38.412229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.447 [2024-11-04 02:29:38.413155] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 284.952 ms, result 0 00:18:52.390  [2024-11-04T02:29:40.451Z] Copying: 15/1024 [MB] (15 MBps) [2024-11-04T02:29:41.838Z] Copying: 31/1024 [MB] (15 MBps) [2024-11-04T02:29:42.779Z] Copying: 41/1024 [MB] (10 MBps) [2024-11-04T02:29:43.731Z] Copying: 52/1024 [MB] (11 MBps) [2024-11-04T02:29:44.674Z] Copying: 84/1024 [MB] (31 MBps) [2024-11-04T02:29:45.617Z] Copying: 95/1024 [MB] (11 MBps) [2024-11-04T02:29:46.563Z] Copying: 106/1024 [MB] (10 MBps) [2024-11-04T02:29:47.505Z] Copying: 120/1024 [MB] (14 MBps) [2024-11-04T02:29:48.448Z] Copying: 142/1024 [MB] (21 MBps) [2024-11-04T02:29:49.821Z] Copying: 165/1024 [MB] (22 MBps) [2024-11-04T02:29:50.764Z] Copying: 219/1024 [MB] (53 MBps) [2024-11-04T02:29:51.707Z] Copying: 238/1024 [MB] (19 MBps) [2024-11-04T02:29:52.650Z] Copying: 252/1024 [MB] (13 MBps) [2024-11-04T02:29:53.591Z] Copying: 269/1024 [MB] (17 MBps) [2024-11-04T02:29:54.532Z] Copying: 292/1024 [MB] (22 MBps) [2024-11-04T02:29:55.475Z] Copying: 312/1024 [MB] (19 MBps) [2024-11-04T02:29:56.861Z] Copying: 331/1024 [MB] (19 MBps) [2024-11-04T02:29:57.433Z] Copying: 350/1024 [MB] (18 MBps) [2024-11-04T02:29:58.819Z] Copying: 369/1024 [MB] (19 MBps) [2024-11-04T02:29:59.760Z] Copying: 390/1024 [MB] (20 MBps) [2024-11-04T02:30:00.702Z] Copying: 414/1024 [MB] (24 MBps) [2024-11-04T02:30:01.646Z] Copying: 431/1024 [MB] (16 MBps) [2024-11-04T02:30:02.589Z] Copying: 451/1024 [MB] (20 MBps) [2024-11-04T02:30:03.597Z] Copying: 466/1024 [MB] (14 MBps) [2024-11-04T02:30:04.540Z] Copying: 482/1024 [MB] (15 MBps) [2024-11-04T02:30:05.486Z] Copying: 499/1024 [MB] (17 MBps) [2024-11-04T02:30:06.430Z] Copying: 511/1024 [MB] (11 MBps) 
[2024-11-04T02:30:07.816Z] Copying: 530/1024 [MB] (19 MBps) [2024-11-04T02:30:08.762Z] Copying: 543/1024 [MB] (12 MBps) [2024-11-04T02:30:09.706Z] Copying: 559/1024 [MB] (16 MBps) [2024-11-04T02:30:10.650Z] Copying: 575/1024 [MB] (15 MBps) [2024-11-04T02:30:11.596Z] Copying: 589/1024 [MB] (14 MBps) [2024-11-04T02:30:12.540Z] Copying: 599/1024 [MB] (10 MBps) [2024-11-04T02:30:13.486Z] Copying: 609/1024 [MB] (10 MBps) [2024-11-04T02:30:14.431Z] Copying: 619/1024 [MB] (10 MBps) [2024-11-04T02:30:15.821Z] Copying: 630/1024 [MB] (10 MBps) [2024-11-04T02:30:16.763Z] Copying: 640/1024 [MB] (10 MBps) [2024-11-04T02:30:17.710Z] Copying: 650/1024 [MB] (10 MBps) [2024-11-04T02:30:18.652Z] Copying: 660/1024 [MB] (10 MBps) [2024-11-04T02:30:19.597Z] Copying: 670/1024 [MB] (10 MBps) [2024-11-04T02:30:20.557Z] Copying: 681/1024 [MB] (10 MBps) [2024-11-04T02:30:21.499Z] Copying: 691/1024 [MB] (10 MBps) [2024-11-04T02:30:22.431Z] Copying: 709/1024 [MB] (18 MBps) [2024-11-04T02:30:23.858Z] Copying: 762/1024 [MB] (52 MBps) [2024-11-04T02:30:24.432Z] Copying: 782/1024 [MB] (20 MBps) [2024-11-04T02:30:25.816Z] Copying: 794/1024 [MB] (11 MBps) [2024-11-04T02:30:26.759Z] Copying: 804/1024 [MB] (10 MBps) [2024-11-04T02:30:27.703Z] Copying: 817/1024 [MB] (12 MBps) [2024-11-04T02:30:28.645Z] Copying: 837/1024 [MB] (20 MBps) [2024-11-04T02:30:29.585Z] Copying: 852/1024 [MB] (15 MBps) [2024-11-04T02:30:30.524Z] Copying: 867/1024 [MB] (15 MBps) [2024-11-04T02:30:31.468Z] Copying: 881/1024 [MB] (13 MBps) [2024-11-04T02:30:32.853Z] Copying: 897/1024 [MB] (15 MBps) [2024-11-04T02:30:33.795Z] Copying: 915/1024 [MB] (18 MBps) [2024-11-04T02:30:34.741Z] Copying: 933/1024 [MB] (17 MBps) [2024-11-04T02:30:35.682Z] Copying: 944/1024 [MB] (10 MBps) [2024-11-04T02:30:36.627Z] Copying: 963/1024 [MB] (18 MBps) [2024-11-04T02:30:37.573Z] Copying: 976/1024 [MB] (13 MBps) [2024-11-04T02:30:38.514Z] Copying: 994/1024 [MB] (18 MBps) [2024-11-04T02:30:39.455Z] Copying: 1016/1024 [MB] (22 MBps) [2024-11-04T02:30:39.455Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-04 02:30:39.117346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.344 [2024-11-04 02:30:39.117489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:52.344 [2024-11-04 02:30:39.117561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:52.344 [2024-11-04 02:30:39.117586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.344 [2024-11-04 02:30:39.117628] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:52.344 [2024-11-04 02:30:39.120762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.344 [2024-11-04 02:30:39.120941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:52.344 [2024-11-04 02:30:39.121018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.093 ms 00:19:52.344 [2024-11-04 02:30:39.121049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.344 [2024-11-04 02:30:39.123909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.344 [2024-11-04 02:30:39.124045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:52.344 [2024-11-04 02:30:39.124106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.804 ms 00:19:52.344 [2024-11-04 02:30:39.124128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.344 
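Two throughput figures in this stretch can be cross-checked by hand. The dd prefill earlier reported 1073741824 bytes in 4.01606 s, and the Copying progress stream above ends at "average 16 MBps" for 1024 MB pushed through ftl0 between the spdk_dd start (~02:29:37) and the final tick (02:30:39), roughly 62 s of wall time. A quick bash sketch of both divisions (the 62 s figure is read off the timestamps, so the second result is approximate):

# dd reports MB as 10^6 bytes
awk 'BEGIN { printf "dd prefill: %.0f MB/s\n", 1073741824 / 4.01606 / 1e6 }'  # -> 267 MB/s, as logged
# spdk_dd copy: 1024 MB over ~62 s of wall time
awk 'BEGIN { printf "ftl0 copy:  %.1f MBps\n", 1024 / 62 }'                   # -> ~16.5, consistent with "average 16 MBps"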
[2024-11-04 02:30:39.142767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.344 [2024-11-04 02:30:39.142955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:52.344 [2024-11-04 02:30:39.143058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.606 ms 00:19:52.344 [2024-11-04 02:30:39.143085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.344 [2024-11-04 02:30:39.149238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.344 [2024-11-04 02:30:39.149388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:52.344 [2024-11-04 02:30:39.149456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.106 ms 00:19:52.344 [2024-11-04 02:30:39.149480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.344 [2024-11-04 02:30:39.175464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.344 [2024-11-04 02:30:39.175617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:52.344 [2024-11-04 02:30:39.175676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.912 ms 00:19:52.344 [2024-11-04 02:30:39.175697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.344 [2024-11-04 02:30:39.191352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.344 [2024-11-04 02:30:39.191506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:52.344 [2024-11-04 02:30:39.191568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.564 ms 00:19:52.344 [2024-11-04 02:30:39.191589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.344 [2024-11-04 02:30:39.191780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.344 [2024-11-04 02:30:39.191917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:52.344 [2024-11-04 02:30:39.191944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:19:52.344 [2024-11-04 02:30:39.191970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.344 [2024-11-04 02:30:39.217599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.344 [2024-11-04 02:30:39.217750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:52.344 [2024-11-04 02:30:39.217807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.574 ms 00:19:52.344 [2024-11-04 02:30:39.217829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.344 [2024-11-04 02:30:39.243265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.344 [2024-11-04 02:30:39.243421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:52.344 [2024-11-04 02:30:39.243496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.337 ms 00:19:52.344 [2024-11-04 02:30:39.243518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.344 [2024-11-04 02:30:39.267891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.344 [2024-11-04 02:30:39.268039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:52.344 [2024-11-04 02:30:39.268097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.269 ms 00:19:52.344 [2024-11-04 02:30:39.268118] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.344 [2024-11-04 02:30:39.292620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.344 [2024-11-04 02:30:39.292766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:52.344 [2024-11-04 02:30:39.292822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.371 ms 00:19:52.344 [2024-11-04 02:30:39.292845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.344 [2024-11-04 02:30:39.292944] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:52.344 [2024-11-04 02:30:39.292977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:52.344 [2024-11-04 02:30:39.293010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:52.344 [2024-11-04 02:30:39.293039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:52.344 [2024-11-04 02:30:39.293067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:52.344 [2024-11-04 02:30:39.293149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:52.344 [2024-11-04 02:30:39.293179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:52.344 [2024-11-04 02:30:39.293208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:52.344 [2024-11-04 02:30:39.293237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:52.344 [2024-11-04 02:30:39.293364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:52.344 [2024-11-04 02:30:39.293394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:52.344 [2024-11-04 02:30:39.293424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:52.344 [2024-11-04 02:30:39.293452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:52.345 [2024-11-04 02:30:39.293504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:52.345 [2024-11-04 02:30:39.293536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:52.345 [2024-11-04 02:30:39.293566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:52.345 [2024-11-04 02:30:39.293594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:52.345 [2024-11-04 02:30:39.293623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:52.345 [2024-11-04 02:30:39.293652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:52.345 [2024-11-04 02:30:39.293681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:52.345 [2024-11-04 02:30:39.293710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:52.345 [2024-11-04 02:30:39.293719] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free [... Bands 22-94 omitted: each reports 0 / 261120 wr_cnt: 0 state: free ...] 00:19:52.345 [2024-11-04 02:30:39.294307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120
wr_cnt: 0 state: free 00:19:52.345 [2024-11-04 02:30:39.294315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:52.345 [2024-11-04 02:30:39.294322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:52.345 [2024-11-04 02:30:39.294331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:52.345 [2024-11-04 02:30:39.294338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:52.345 [2024-11-04 02:30:39.294346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:52.345 [2024-11-04 02:30:39.294362] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:52.345 [2024-11-04 02:30:39.294377] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: eb37cb69-a1b4-437f-9116-fc28bc24e968 00:19:52.346 [2024-11-04 02:30:39.294385] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:52.346 [2024-11-04 02:30:39.294397] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:52.346 [2024-11-04 02:30:39.294405] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:52.346 [2024-11-04 02:30:39.294414] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:52.346 [2024-11-04 02:30:39.294421] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:52.346 [2024-11-04 02:30:39.294429] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:52.346 [2024-11-04 02:30:39.294455] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:52.346 [2024-11-04 02:30:39.294469] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:52.346 [2024-11-04 02:30:39.294476] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:52.346 [2024-11-04 02:30:39.294484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.346 [2024-11-04 02:30:39.294492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:52.346 [2024-11-04 02:30:39.294502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.541 ms 00:19:52.346 [2024-11-04 02:30:39.294509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.346 [2024-11-04 02:30:39.308110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.346 [2024-11-04 02:30:39.308154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:52.346 [2024-11-04 02:30:39.308166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.577 ms 00:19:52.346 [2024-11-04 02:30:39.308175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.346 [2024-11-04 02:30:39.308569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.346 [2024-11-04 02:30:39.308579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:52.346 [2024-11-04 02:30:39.308588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:19:52.346 [2024-11-04 02:30:39.308596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.346 [2024-11-04 02:30:39.345049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.346 [2024-11-04 02:30:39.345094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize reloc 00:19:52.346 [2024-11-04 02:30:39.345107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.346 [2024-11-04 02:30:39.345116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.346 [2024-11-04 02:30:39.345188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.346 [2024-11-04 02:30:39.345197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:52.346 [2024-11-04 02:30:39.345207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.346 [2024-11-04 02:30:39.345216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.346 [2024-11-04 02:30:39.345288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.346 [2024-11-04 02:30:39.345300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:52.346 [2024-11-04 02:30:39.345310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.346 [2024-11-04 02:30:39.345319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.346 [2024-11-04 02:30:39.345335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.346 [2024-11-04 02:30:39.345344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:52.346 [2024-11-04 02:30:39.345353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.346 [2024-11-04 02:30:39.345361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.346 [2024-11-04 02:30:39.411335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.346 [2024-11-04 02:30:39.411374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:52.346 [2024-11-04 02:30:39.411384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.346 [2024-11-04 02:30:39.411391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.605 [2024-11-04 02:30:39.461748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.605 [2024-11-04 02:30:39.461875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:52.605 [2024-11-04 02:30:39.461888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.605 [2024-11-04 02:30:39.461895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.605 [2024-11-04 02:30:39.461950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.605 [2024-11-04 02:30:39.461962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:52.605 [2024-11-04 02:30:39.461968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.605 [2024-11-04 02:30:39.461974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.605 [2024-11-04 02:30:39.462004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.605 [2024-11-04 02:30:39.462011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:52.605 [2024-11-04 02:30:39.462017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.605 [2024-11-04 02:30:39.462023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.605 [2024-11-04 02:30:39.462093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.605 
[2024-11-04 02:30:39.462100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:52.605 [2024-11-04 02:30:39.462109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.605 [2024-11-04 02:30:39.462115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.605 [2024-11-04 02:30:39.462137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.605 [2024-11-04 02:30:39.462143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:52.605 [2024-11-04 02:30:39.462149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.605 [2024-11-04 02:30:39.462156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.605 [2024-11-04 02:30:39.462184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.605 [2024-11-04 02:30:39.462191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:52.605 [2024-11-04 02:30:39.462199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.605 [2024-11-04 02:30:39.462204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.605 [2024-11-04 02:30:39.462238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.605 [2024-11-04 02:30:39.462245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:52.605 [2024-11-04 02:30:39.462251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.605 [2024-11-04 02:30:39.462256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.605 [2024-11-04 02:30:39.462348] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 344.984 ms, result 0 00:19:53.173 00:19:53.174 00:19:53.174 02:30:40 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:53.174 [2024-11-04 02:30:40.249324] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:19:53.174 [2024-11-04 02:30:40.249449] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75406 ] 00:19:53.434 [2024-11-04 02:30:40.410464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.434 [2024-11-04 02:30:40.531761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.008 [2024-11-04 02:30:40.823039] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:54.008 [2024-11-04 02:30:40.823120] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:54.008 [2024-11-04 02:30:40.984262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.008 [2024-11-04 02:30:40.984327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:54.008 [2024-11-04 02:30:40.984346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:54.008 [2024-11-04 02:30:40.984355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.008 [2024-11-04 02:30:40.984411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.008 [2024-11-04 02:30:40.984421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:54.008 [2024-11-04 02:30:40.984433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:54.008 [2024-11-04 02:30:40.984441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.008 [2024-11-04 02:30:40.984461] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:54.008 [2024-11-04 02:30:40.985240] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:54.008 [2024-11-04 02:30:40.985263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.008 [2024-11-04 02:30:40.985272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:54.008 [2024-11-04 02:30:40.985283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.807 ms 00:19:54.008 [2024-11-04 02:30:40.985291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.008 [2024-11-04 02:30:40.987039] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:54.008 [2024-11-04 02:30:41.001227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.008 [2024-11-04 02:30:41.001277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:54.008 [2024-11-04 02:30:41.001291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.190 ms 00:19:54.008 [2024-11-04 02:30:41.001300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.008 [2024-11-04 02:30:41.001378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.008 [2024-11-04 02:30:41.001391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:54.008 [2024-11-04 02:30:41.001400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:19:54.008 [2024-11-04 02:30:41.001409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.008 [2024-11-04 02:30:41.009650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:54.008 [2024-11-04 02:30:41.009695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:54.008 [2024-11-04 02:30:41.009706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.163 ms 00:19:54.008 [2024-11-04 02:30:41.009715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.008 [2024-11-04 02:30:41.009801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.008 [2024-11-04 02:30:41.009810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:54.008 [2024-11-04 02:30:41.009819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:19:54.008 [2024-11-04 02:30:41.009828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.008 [2024-11-04 02:30:41.009898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.008 [2024-11-04 02:30:41.009909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:54.008 [2024-11-04 02:30:41.009918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:54.008 [2024-11-04 02:30:41.009926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.008 [2024-11-04 02:30:41.009949] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:54.008 [2024-11-04 02:30:41.014116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.008 [2024-11-04 02:30:41.014157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:54.008 [2024-11-04 02:30:41.014169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.172 ms 00:19:54.008 [2024-11-04 02:30:41.014181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.008 [2024-11-04 02:30:41.014216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.008 [2024-11-04 02:30:41.014224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:54.008 [2024-11-04 02:30:41.014233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:54.008 [2024-11-04 02:30:41.014241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.008 [2024-11-04 02:30:41.014292] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:54.008 [2024-11-04 02:30:41.014314] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:54.008 [2024-11-04 02:30:41.014352] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:54.008 [2024-11-04 02:30:41.014371] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:54.008 [2024-11-04 02:30:41.014477] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:54.008 [2024-11-04 02:30:41.014488] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:54.008 [2024-11-04 02:30:41.014500] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:54.009 [2024-11-04 02:30:41.014511] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:54.009 [2024-11-04 02:30:41.014520] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:54.009 [2024-11-04 02:30:41.014528] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:54.009 [2024-11-04 02:30:41.014536] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:54.009 [2024-11-04 02:30:41.014544] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:54.009 [2024-11-04 02:30:41.014551] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:54.009 [2024-11-04 02:30:41.014563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.009 [2024-11-04 02:30:41.014571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:54.009 [2024-11-04 02:30:41.014579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:19:54.009 [2024-11-04 02:30:41.014586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.009 [2024-11-04 02:30:41.014669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.009 [2024-11-04 02:30:41.014678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:54.009 [2024-11-04 02:30:41.014686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:54.009 [2024-11-04 02:30:41.014693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.009 [2024-11-04 02:30:41.014796] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:54.009 [2024-11-04 02:30:41.014809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:54.009 [2024-11-04 02:30:41.014817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:54.009 [2024-11-04 02:30:41.014825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.009 [2024-11-04 02:30:41.014832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:54.009 [2024-11-04 02:30:41.014839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:54.009 [2024-11-04 02:30:41.014846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:54.009 [2024-11-04 02:30:41.014853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:54.009 [2024-11-04 02:30:41.014860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:54.009 [2024-11-04 02:30:41.014910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:54.009 [2024-11-04 02:30:41.014918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:54.009 [2024-11-04 02:30:41.014925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:54.009 [2024-11-04 02:30:41.014935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:54.009 [2024-11-04 02:30:41.014943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:54.009 [2024-11-04 02:30:41.014951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:54.009 [2024-11-04 02:30:41.014966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.009 [2024-11-04 02:30:41.014973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:54.009 [2024-11-04 02:30:41.014980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:54.009 [2024-11-04 02:30:41.014988] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.009 [2024-11-04 02:30:41.014995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:54.009 [2024-11-04 02:30:41.015002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:54.009 [2024-11-04 02:30:41.015010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:54.009 [2024-11-04 02:30:41.015017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:54.009 [2024-11-04 02:30:41.015023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:54.009 [2024-11-04 02:30:41.015030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:54.009 [2024-11-04 02:30:41.015037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:54.009 [2024-11-04 02:30:41.015044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:54.009 [2024-11-04 02:30:41.015051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:54.009 [2024-11-04 02:30:41.015058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:54.009 [2024-11-04 02:30:41.015065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:54.009 [2024-11-04 02:30:41.015072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:54.009 [2024-11-04 02:30:41.015079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:54.009 [2024-11-04 02:30:41.015086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:54.009 [2024-11-04 02:30:41.015093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:54.009 [2024-11-04 02:30:41.015100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:54.009 [2024-11-04 02:30:41.015107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:54.009 [2024-11-04 02:30:41.015114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:54.009 [2024-11-04 02:30:41.015121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:54.009 [2024-11-04 02:30:41.015127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:54.009 [2024-11-04 02:30:41.015134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.009 [2024-11-04 02:30:41.015141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:54.009 [2024-11-04 02:30:41.015148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:54.009 [2024-11-04 02:30:41.015154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.009 [2024-11-04 02:30:41.015161] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:54.009 [2024-11-04 02:30:41.015169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:54.009 [2024-11-04 02:30:41.015177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:54.009 [2024-11-04 02:30:41.015185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.009 [2024-11-04 02:30:41.015197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:54.009 [2024-11-04 02:30:41.015204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:54.009 [2024-11-04 02:30:41.015211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:54.009 
[2024-11-04 02:30:41.015217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:54.009 [2024-11-04 02:30:41.015224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:54.009 [2024-11-04 02:30:41.015230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:54.009 [2024-11-04 02:30:41.015239] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:54.009 [2024-11-04 02:30:41.015249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:54.009 [2024-11-04 02:30:41.015258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:54.009 [2024-11-04 02:30:41.015265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:54.009 [2024-11-04 02:30:41.015272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:54.009 [2024-11-04 02:30:41.015279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:54.009 [2024-11-04 02:30:41.015286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:54.009 [2024-11-04 02:30:41.015293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:54.009 [2024-11-04 02:30:41.015300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:54.009 [2024-11-04 02:30:41.015308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:54.009 [2024-11-04 02:30:41.015316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:54.009 [2024-11-04 02:30:41.015324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:54.009 [2024-11-04 02:30:41.015333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:54.009 [2024-11-04 02:30:41.015341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:54.009 [2024-11-04 02:30:41.015349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:54.009 [2024-11-04 02:30:41.015356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:54.009 [2024-11-04 02:30:41.015364] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:54.009 [2024-11-04 02:30:41.015373] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:54.009 [2024-11-04 02:30:41.015384] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:54.009 [2024-11-04 02:30:41.015392] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:54.009 [2024-11-04 02:30:41.015399] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:54.009 [2024-11-04 02:30:41.015406] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:54.009 [2024-11-04 02:30:41.015414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.009 [2024-11-04 02:30:41.015422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:54.009 [2024-11-04 02:30:41.015430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.687 ms 00:19:54.009 [2024-11-04 02:30:41.015437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.009 [2024-11-04 02:30:41.048031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.009 [2024-11-04 02:30:41.048271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:54.009 [2024-11-04 02:30:41.048292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.548 ms 00:19:54.009 [2024-11-04 02:30:41.048302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.009 [2024-11-04 02:30:41.048400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.010 [2024-11-04 02:30:41.048415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:54.010 [2024-11-04 02:30:41.048425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:54.010 [2024-11-04 02:30:41.048433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.010 [2024-11-04 02:30:41.093310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.010 [2024-11-04 02:30:41.093349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:54.010 [2024-11-04 02:30:41.093361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.814 ms 00:19:54.010 [2024-11-04 02:30:41.093369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.010 [2024-11-04 02:30:41.093408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.010 [2024-11-04 02:30:41.093417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:54.010 [2024-11-04 02:30:41.093425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:54.010 [2024-11-04 02:30:41.093435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.010 [2024-11-04 02:30:41.093822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.010 [2024-11-04 02:30:41.093838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:54.010 [2024-11-04 02:30:41.093847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:19:54.010 [2024-11-04 02:30:41.093855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.010 [2024-11-04 02:30:41.094001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.010 [2024-11-04 02:30:41.094011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:54.010 [2024-11-04 02:30:41.094019] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:19:54.010 [2024-11-04 02:30:41.094026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.010 [2024-11-04 02:30:41.107313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.010 [2024-11-04 02:30:41.107345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:54.010 [2024-11-04 02:30:41.107355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.263 ms 00:19:54.010 [2024-11-04 02:30:41.107365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.273 [2024-11-04 02:30:41.120425] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:54.274 [2024-11-04 02:30:41.120460] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:54.274 [2024-11-04 02:30:41.120473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.274 [2024-11-04 02:30:41.120480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:54.274 [2024-11-04 02:30:41.120489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.018 ms 00:19:54.274 [2024-11-04 02:30:41.120496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.274 [2024-11-04 02:30:41.144792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.274 [2024-11-04 02:30:41.144831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:54.274 [2024-11-04 02:30:41.144842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.255 ms 00:19:54.274 [2024-11-04 02:30:41.144849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.274 [2024-11-04 02:30:41.156830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.274 [2024-11-04 02:30:41.156874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:54.274 [2024-11-04 02:30:41.156885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.930 ms 00:19:54.274 [2024-11-04 02:30:41.156892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.274 [2024-11-04 02:30:41.168688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.274 [2024-11-04 02:30:41.168720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:54.274 [2024-11-04 02:30:41.168730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.762 ms 00:19:54.274 [2024-11-04 02:30:41.168737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.274 [2024-11-04 02:30:41.169356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.274 [2024-11-04 02:30:41.169377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:54.274 [2024-11-04 02:30:41.169386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:19:54.274 [2024-11-04 02:30:41.169394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.274 [2024-11-04 02:30:41.227342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.274 [2024-11-04 02:30:41.227403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:54.274 [2024-11-04 02:30:41.227418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 57.927 ms 00:19:54.274 [2024-11-04 02:30:41.227432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.274 [2024-11-04 02:30:41.238516] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:54.274 [2024-11-04 02:30:41.241351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.274 [2024-11-04 02:30:41.241391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:54.274 [2024-11-04 02:30:41.241403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.864 ms 00:19:54.274 [2024-11-04 02:30:41.241411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.274 [2024-11-04 02:30:41.241515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.274 [2024-11-04 02:30:41.241527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:54.274 [2024-11-04 02:30:41.241537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:54.274 [2024-11-04 02:30:41.241545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.274 [2024-11-04 02:30:41.241618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.274 [2024-11-04 02:30:41.241629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:54.274 [2024-11-04 02:30:41.241638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:54.274 [2024-11-04 02:30:41.241646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.274 [2024-11-04 02:30:41.241665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.274 [2024-11-04 02:30:41.241674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:54.274 [2024-11-04 02:30:41.241682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:54.274 [2024-11-04 02:30:41.241690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.274 [2024-11-04 02:30:41.241722] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:54.274 [2024-11-04 02:30:41.241736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.274 [2024-11-04 02:30:41.241744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:54.274 [2024-11-04 02:30:41.241752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:54.274 [2024-11-04 02:30:41.241760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.274 [2024-11-04 02:30:41.267245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.274 [2024-11-04 02:30:41.267413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:54.274 [2024-11-04 02:30:41.267477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.467 ms 00:19:54.274 [2024-11-04 02:30:41.267504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.274 [2024-11-04 02:30:41.267603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.274 [2024-11-04 02:30:41.267630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:54.274 [2024-11-04 02:30:41.267650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:19:54.274 [2024-11-04 02:30:41.267670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:54.274 [2024-11-04 02:30:41.268950] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 284.181 ms, result 0 00:19:55.666  [2024-11-04T02:30:43.756Z] Copying: 16/1024 [MB] (16 MBps) [... periodic spdk_dd progress updates from 16/1024 MB through 821/1024 MB omitted; per-interval throughput ranged 10-23 MBps ...]
[2024-11-04T02:31:39.624Z] Copying: 842/1024 [MB] (21 MBps) [2024-11-04T02:31:40.566Z] Copying: 863/1024 [MB] (20 MBps) [2024-11-04T02:31:41.510Z] Copying: 877/1024 [MB] (14 MBps) [2024-11-04T02:31:42.896Z] Copying: 892/1024 [MB] (14 MBps) [2024-11-04T02:31:43.468Z] Copying: 910/1024 [MB] (18 MBps) [2024-11-04T02:31:44.894Z] Copying: 928/1024 [MB] (18 MBps) [2024-11-04T02:31:45.467Z] Copying: 942/1024 [MB] (14 MBps) [2024-11-04T02:31:46.854Z] Copying: 956/1024 [MB] (14 MBps) [2024-11-04T02:31:47.796Z] Copying: 969/1024 [MB] (12 MBps) [2024-11-04T02:31:48.742Z] Copying: 983/1024 [MB] (14 MBps) [2024-11-04T02:31:49.686Z] Copying: 996/1024 [MB] (12 MBps) [2024-11-04T02:31:50.628Z] Copying: 1013/1024 [MB] (17 MBps) [2024-11-04T02:31:51.202Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-04 02:31:50.953117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.091 [2024-11-04 02:31:50.953491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:04.091 [2024-11-04 02:31:50.953520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:04.091 [2024-11-04 02:31:50.953532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.091 [2024-11-04 02:31:50.953567] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:04.091 [2024-11-04 02:31:50.956791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.091 [2024-11-04 02:31:50.956986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:04.091 [2024-11-04 02:31:50.957010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.205 ms 00:21:04.091 [2024-11-04 02:31:50.957030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.091 [2024-11-04 02:31:50.957283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.091 [2024-11-04 02:31:50.957294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:04.091 [2024-11-04 02:31:50.957304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:21:04.091 [2024-11-04 02:31:50.957312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.091 [2024-11-04 02:31:50.960770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.091 [2024-11-04 02:31:50.960919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:04.091 [2024-11-04 02:31:50.960935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.443 ms 00:21:04.091 [2024-11-04 02:31:50.960944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.091 [2024-11-04 02:31:50.967806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.091 [2024-11-04 02:31:50.967853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:04.091 [2024-11-04 02:31:50.967884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.830 ms 00:21:04.091 [2024-11-04 02:31:50.967895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.091 [2024-11-04 02:31:50.996813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.091 [2024-11-04 02:31:50.996884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:04.091 [2024-11-04 02:31:50.996899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.840 ms 00:21:04.091 [2024-11-04 
02:31:50.996908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.091 [2024-11-04 02:31:51.013414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.091 [2024-11-04 02:31:51.013467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:04.091 [2024-11-04 02:31:51.013482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.449 ms 00:21:04.091 [2024-11-04 02:31:51.013490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.091 [2024-11-04 02:31:51.013645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.091 [2024-11-04 02:31:51.013658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:04.091 [2024-11-04 02:31:51.013675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:21:04.091 [2024-11-04 02:31:51.013684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.091 [2024-11-04 02:31:51.041912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.091 [2024-11-04 02:31:51.041963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:04.091 [2024-11-04 02:31:51.041976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.210 ms 00:21:04.091 [2024-11-04 02:31:51.041984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.091 [2024-11-04 02:31:51.068893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.091 [2024-11-04 02:31:51.068958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:04.091 [2024-11-04 02:31:51.068973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.857 ms 00:21:04.091 [2024-11-04 02:31:51.068981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.091 [2024-11-04 02:31:51.096637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.091 [2024-11-04 02:31:51.096693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:04.091 [2024-11-04 02:31:51.096708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.601 ms 00:21:04.091 [2024-11-04 02:31:51.096717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.091 [2024-11-04 02:31:51.123077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.091 [2024-11-04 02:31:51.123129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:04.091 [2024-11-04 02:31:51.123144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.251 ms 00:21:04.091 [2024-11-04 02:31:51.123152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.091 [2024-11-04 02:31:51.123203] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:04.091 [2024-11-04 02:31:51.123220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:04.091 [2024-11-04 02:31:51.123239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:04.091 [2024-11-04 02:31:51.123249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:04.091 [2024-11-04 02:31:51.123258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:04.091 [2024-11-04 02:31:51.123267] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:04.091 [2024-11-04 02:31:51.123275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:04.091 [2024-11-04 02:31:51.123284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:04.091 [2024-11-04 02:31:51.123292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:04.091 [2024-11-04 02:31:51.123300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:04.091 [2024-11-04 02:31:51.123309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:04.091 [2024-11-04 02:31:51.123317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:04.091 [2024-11-04 02:31:51.123325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:04.091 [2024-11-04 02:31:51.123333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:04.091 [2024-11-04 02:31:51.123341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:04.091 [2024-11-04 02:31:51.123349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:04.091 [2024-11-04 02:31:51.123356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:04.091 [2024-11-04 02:31:51.123364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:04.091 [2024-11-04 02:31:51.123371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:04.091 [2024-11-04 02:31:51.123379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:04.091 [2024-11-04 02:31:51.123386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:04.091 [2024-11-04 02:31:51.123393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:04.091 [2024-11-04 02:31:51.123401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:04.091 [2024-11-04 02:31:51.123409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:04.091 [2024-11-04 02:31:51.123416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:04.091 [2024-11-04 02:31:51.123424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:04.091 [2024-11-04 02:31:51.123432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:04.091 [2024-11-04 02:31:51.123440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:04.091 [2024-11-04 02:31:51.123448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:04.091 [2024-11-04 02:31:51.123456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 
02:31:51.123464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
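
The bands-validity dump running through these lines enumerates every band as "Band N: <valid blocks> / <total blocks> wr_cnt: <write count> state: <state>"; after this clean shutdown with no user writes, all 100 bands report 0 / 261120 valid blocks and state free. A minimal C sketch of the record such a dump walks over; the field and function names are illustrative, not SPDK's actual ftl_band definitions:

#include <stdio.h>

#define BAND_COUNT      100
#define BLOCKS_PER_BAND 261120   /* per-band block total shown in the dump */

enum band_state { BAND_FREE, BAND_OPEN, BAND_CLOSED };

struct band {
    unsigned id;
    unsigned valid_blocks;   /* still-live user blocks in this band */
    unsigned wr_cnt;         /* how many times the band has been written */
    enum band_state state;
};

static const char *state_name(enum band_state s)
{
    switch (s) {
    case BAND_OPEN:   return "open";
    case BAND_CLOSED: return "closed";
    default:          return "free";
    }
}

int main(void)
{
    /* Zero-initialized bands reproduce the all-free dump above. */
    struct band bands[BAND_COUNT] = {0};

    for (unsigned i = 0; i < BAND_COUNT; i++) {
        bands[i].id = i + 1;
        printf("Band %u: %u / %u wr_cnt: %u state: %s\n",
               bands[i].id, bands[i].valid_blocks, BLOCKS_PER_BAND,
               bands[i].wr_cnt, state_name(bands[i].state));
    }
    return 0;
}

In the second shutdown further down, Band 1 instead reports 105216 / 261120 with wr_cnt 1 and state open; if FTL blocks are 4 KiB, as the startup layout dump later suggests, that is about 411 MiB of still-valid restored data in the first band.
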
00:21:04.092 [2024-11-04 02:31:51.123657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.123997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.124005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.124013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.124021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.124030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.124039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.124047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.124055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.124063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.124071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:04.092 [2024-11-04 02:31:51.124087] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:04.092 [2024-11-04 02:31:51.124096] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: eb37cb69-a1b4-437f-9116-fc28bc24e968 00:21:04.092 [2024-11-04 02:31:51.124108] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:04.092 [2024-11-04 02:31:51.124116] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:04.092 [2024-11-04 
02:31:51.124123] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:04.092 [2024-11-04 02:31:51.124131] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:04.092 [2024-11-04 02:31:51.124138] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:04.092 [2024-11-04 02:31:51.124146] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:04.092 [2024-11-04 02:31:51.124161] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:04.092 [2024-11-04 02:31:51.124168] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:04.092 [2024-11-04 02:31:51.124174] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:04.092 [2024-11-04 02:31:51.124183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.092 [2024-11-04 02:31:51.124202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:04.092 [2024-11-04 02:31:51.124212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.982 ms 00:21:04.092 [2024-11-04 02:31:51.124220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.092 [2024-11-04 02:31:51.138061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.092 [2024-11-04 02:31:51.138259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:04.092 [2024-11-04 02:31:51.138279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.802 ms 00:21:04.092 [2024-11-04 02:31:51.138289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.092 [2024-11-04 02:31:51.138679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.092 [2024-11-04 02:31:51.138690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:04.092 [2024-11-04 02:31:51.138699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:21:04.092 [2024-11-04 02:31:51.138706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.092 [2024-11-04 02:31:51.175489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.092 [2024-11-04 02:31:51.175540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:04.092 [2024-11-04 02:31:51.175553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.093 [2024-11-04 02:31:51.175564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.093 [2024-11-04 02:31:51.175633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.093 [2024-11-04 02:31:51.175644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:04.093 [2024-11-04 02:31:51.175654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.093 [2024-11-04 02:31:51.175664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.093 [2024-11-04 02:31:51.175780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.093 [2024-11-04 02:31:51.175792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:04.093 [2024-11-04 02:31:51.175803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.093 [2024-11-04 02:31:51.175813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.093 [2024-11-04 02:31:51.175830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:21:04.093 [2024-11-04 02:31:51.175840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:04.093 [2024-11-04 02:31:51.175850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.093 [2024-11-04 02:31:51.175859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.354 [2024-11-04 02:31:51.263436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.354 [2024-11-04 02:31:51.263499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:04.354 [2024-11-04 02:31:51.263517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.354 [2024-11-04 02:31:51.263526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.354 [2024-11-04 02:31:51.334497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.354 [2024-11-04 02:31:51.334719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:04.354 [2024-11-04 02:31:51.334741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.354 [2024-11-04 02:31:51.334751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.354 [2024-11-04 02:31:51.334830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.354 [2024-11-04 02:31:51.334842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:04.354 [2024-11-04 02:31:51.334851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.354 [2024-11-04 02:31:51.334859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.354 [2024-11-04 02:31:51.334956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.354 [2024-11-04 02:31:51.334968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:04.354 [2024-11-04 02:31:51.334976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.354 [2024-11-04 02:31:51.334985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.354 [2024-11-04 02:31:51.335094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.354 [2024-11-04 02:31:51.335109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:04.354 [2024-11-04 02:31:51.335118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.354 [2024-11-04 02:31:51.335126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.354 [2024-11-04 02:31:51.335161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.354 [2024-11-04 02:31:51.335171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:04.354 [2024-11-04 02:31:51.335180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.354 [2024-11-04 02:31:51.335190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.354 [2024-11-04 02:31:51.335231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.354 [2024-11-04 02:31:51.335243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:04.354 [2024-11-04 02:31:51.335252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.354 [2024-11-04 02:31:51.335260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.354 
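
Every management phase in this log is traced the same way: an Action line (or, during teardown, a Rollback line), then its name, duration, and status, and the Rollback entries above appear in roughly the reverse order of the startup Actions (reloc, bands metadata, trim map, valid map, NV cache, metadata, core IO channel, bands, memory pools, superblock, cache bdev, base bdev). A minimal sketch of such a step runner, under the assumption that each step is timed individually and completed steps are unwound in reverse; the names and types here are illustrative, not SPDK's mngt API:

#include <stdio.h>
#include <stddef.h>
#include <time.h>

/* Hypothetical step descriptor; not SPDK's actual mngt structures. */
typedef int (*step_fn)(void);
struct step {
    const char *name;
    step_fn action;    /* run on the way up, traced as "Action" */
    step_fn cleanup;   /* run on the way down, traced as "Rollback" */
};

static double elapsed_ms(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
}

static int trace_step(const char *kind, const char *name, step_fn fn)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    int rc = fn ? fn() : 0;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("[FTL][ftl0] %s\n", kind);
    printf("[FTL][ftl0]  name: %s\n", name);
    printf("[FTL][ftl0]  duration: %.3f ms\n", elapsed_ms(t0, t1));
    printf("[FTL][ftl0]  status: %d\n", rc);
    return rc;
}

static int noop(void) { return 0; }

int main(void)
{
    struct step steps[] = {
        { "Open base bdev",   noop, noop },
        { "Open cache bdev",  noop, noop },
        { "Load super block", noop, NULL },
    };
    size_t n = sizeof(steps) / sizeof(steps[0]), i;

    for (i = 0; i < n; i++)            /* forward pass: Actions */
        if (trace_step("Action", steps[i].name, steps[i].action))
            break;

    while (i-- > 0)                    /* unwind in reverse: Rollbacks */
        if (steps[i].cleanup)
            trace_step("Rollback", steps[i].name, steps[i].cleanup);
    return 0;
}
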
[2024-11-04 02:31:51.335308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.354 [2024-11-04 02:31:51.335319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:04.354 [2024-11-04 02:31:51.335328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.354 [2024-11-04 02:31:51.335337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.354 [2024-11-04 02:31:51.335470] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 382.318 ms, result 0 00:21:05.298 00:21:05.298 00:21:05.298 02:31:52 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:07.213 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:07.214 02:31:54 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:21:07.214 [2024-11-04 02:31:54.317806] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:21:07.214 [2024-11-04 02:31:54.318073] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76173 ] 00:21:07.474 [2024-11-04 02:31:54.477996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.474 [2024-11-04 02:31:54.571767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.736 [2024-11-04 02:31:54.839199] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:07.736 [2024-11-04 02:31:54.839453] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:07.998 [2024-11-04 02:31:54.999943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.998 [2024-11-04 02:31:54.999998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:07.998 [2024-11-04 02:31:55.000017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:07.998 [2024-11-04 02:31:55.000026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.998 [2024-11-04 02:31:55.000081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.998 [2024-11-04 02:31:55.000092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:07.998 [2024-11-04 02:31:55.000103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:07.998 [2024-11-04 02:31:55.000111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.998 [2024-11-04 02:31:55.000131] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:07.998 [2024-11-04 02:31:55.000891] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:07.998 [2024-11-04 02:31:55.000914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.998 [2024-11-04 02:31:55.000922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:07.998 [2024-11-04 02:31:55.000932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.787 ms 00:21:07.998 [2024-11-04 02:31:55.000940] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.998 [2024-11-04 02:31:55.002617] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:07.998 [2024-11-04 02:31:55.016615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.999 [2024-11-04 02:31:55.016797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:07.999 [2024-11-04 02:31:55.016819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.000 ms 00:21:07.999 [2024-11-04 02:31:55.016829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.999 [2024-11-04 02:31:55.016988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.999 [2024-11-04 02:31:55.017020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:07.999 [2024-11-04 02:31:55.017030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:21:07.999 [2024-11-04 02:31:55.017038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.999 [2024-11-04 02:31:55.024805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.999 [2024-11-04 02:31:55.024847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:07.999 [2024-11-04 02:31:55.024857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.688 ms 00:21:07.999 [2024-11-04 02:31:55.024884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.999 [2024-11-04 02:31:55.024968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.999 [2024-11-04 02:31:55.024977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:07.999 [2024-11-04 02:31:55.024986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:21:07.999 [2024-11-04 02:31:55.024994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.999 [2024-11-04 02:31:55.025036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.999 [2024-11-04 02:31:55.025046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:07.999 [2024-11-04 02:31:55.025054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:07.999 [2024-11-04 02:31:55.025062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.999 [2024-11-04 02:31:55.025085] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:07.999 [2024-11-04 02:31:55.029099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.999 [2024-11-04 02:31:55.029136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:07.999 [2024-11-04 02:31:55.029147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.020 ms 00:21:07.999 [2024-11-04 02:31:55.029157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.999 [2024-11-04 02:31:55.029193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.999 [2024-11-04 02:31:55.029202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:07.999 [2024-11-04 02:31:55.029210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:07.999 [2024-11-04 02:31:55.029218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.999 [2024-11-04 02:31:55.029267] ftl_layout.c: 
613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:07.999 [2024-11-04 02:31:55.029290] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:07.999 [2024-11-04 02:31:55.029327] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:07.999 [2024-11-04 02:31:55.029347] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:07.999 [2024-11-04 02:31:55.029452] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:07.999 [2024-11-04 02:31:55.029463] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:07.999 [2024-11-04 02:31:55.029474] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:07.999 [2024-11-04 02:31:55.029486] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:07.999 [2024-11-04 02:31:55.029495] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:07.999 [2024-11-04 02:31:55.029504] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:07.999 [2024-11-04 02:31:55.029511] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:07.999 [2024-11-04 02:31:55.029519] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:07.999 [2024-11-04 02:31:55.029527] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:07.999 [2024-11-04 02:31:55.029538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.999 [2024-11-04 02:31:55.029545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:07.999 [2024-11-04 02:31:55.029554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:21:07.999 [2024-11-04 02:31:55.029561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.999 [2024-11-04 02:31:55.029642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.999 [2024-11-04 02:31:55.029651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:07.999 [2024-11-04 02:31:55.029659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:21:07.999 [2024-11-04 02:31:55.029666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.999 [2024-11-04 02:31:55.029769] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:07.999 [2024-11-04 02:31:55.029782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:07.999 [2024-11-04 02:31:55.029790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:07.999 [2024-11-04 02:31:55.029798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.999 [2024-11-04 02:31:55.029806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:07.999 [2024-11-04 02:31:55.029813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:07.999 [2024-11-04 02:31:55.029820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:07.999 [2024-11-04 02:31:55.029829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 
00:21:07.999 [2024-11-04 02:31:55.029837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:07.999 [2024-11-04 02:31:55.029844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:07.999 [2024-11-04 02:31:55.029851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:07.999 [2024-11-04 02:31:55.029858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:07.999 [2024-11-04 02:31:55.029888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:07.999 [2024-11-04 02:31:55.029896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:07.999 [2024-11-04 02:31:55.029906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:07.999 [2024-11-04 02:31:55.029920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.999 [2024-11-04 02:31:55.029928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:07.999 [2024-11-04 02:31:55.029936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:07.999 [2024-11-04 02:31:55.029943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.999 [2024-11-04 02:31:55.029950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:07.999 [2024-11-04 02:31:55.029957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:07.999 [2024-11-04 02:31:55.029964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:07.999 [2024-11-04 02:31:55.029971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:07.999 [2024-11-04 02:31:55.029979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:07.999 [2024-11-04 02:31:55.029986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:07.999 [2024-11-04 02:31:55.029993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:07.999 [2024-11-04 02:31:55.030000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:07.999 [2024-11-04 02:31:55.030007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:07.999 [2024-11-04 02:31:55.030021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:07.999 [2024-11-04 02:31:55.030029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:07.999 [2024-11-04 02:31:55.030036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:07.999 [2024-11-04 02:31:55.030043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:07.999 [2024-11-04 02:31:55.030050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:07.999 [2024-11-04 02:31:55.030056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:07.999 [2024-11-04 02:31:55.030063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:07.999 [2024-11-04 02:31:55.030070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:07.999 [2024-11-04 02:31:55.030076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:07.999 [2024-11-04 02:31:55.030083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:07.999 [2024-11-04 02:31:55.030090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:07.999 [2024-11-04 02:31:55.030096] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.999 [2024-11-04 02:31:55.030103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:07.999 [2024-11-04 02:31:55.030110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:07.999 [2024-11-04 02:31:55.030116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.999 [2024-11-04 02:31:55.030122] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:07.999 [2024-11-04 02:31:55.030130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:07.999 [2024-11-04 02:31:55.030137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:07.999 [2024-11-04 02:31:55.030146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.999 [2024-11-04 02:31:55.030153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:07.999 [2024-11-04 02:31:55.030160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:07.999 [2024-11-04 02:31:55.030167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:07.999 [2024-11-04 02:31:55.030173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:07.999 [2024-11-04 02:31:55.030181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:07.999 [2024-11-04 02:31:55.030187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:07.999 [2024-11-04 02:31:55.030195] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:07.999 [2024-11-04 02:31:55.030205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:08.000 [2024-11-04 02:31:55.030214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:08.000 [2024-11-04 02:31:55.030221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:08.000 [2024-11-04 02:31:55.030229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:08.000 [2024-11-04 02:31:55.030236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:08.000 [2024-11-04 02:31:55.030244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:08.000 [2024-11-04 02:31:55.030251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:08.000 [2024-11-04 02:31:55.030258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:08.000 [2024-11-04 02:31:55.030265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:08.000 [2024-11-04 02:31:55.030272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:08.000 [2024-11-04 02:31:55.030279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 
blk_offs:0x71a0 blk_sz:0x20 00:21:08.000 [2024-11-04 02:31:55.030286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:08.000 [2024-11-04 02:31:55.030293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:08.000 [2024-11-04 02:31:55.030300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:08.000 [2024-11-04 02:31:55.030308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:08.000 [2024-11-04 02:31:55.030315] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:08.000 [2024-11-04 02:31:55.030324] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:08.000 [2024-11-04 02:31:55.030335] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:08.000 [2024-11-04 02:31:55.030342] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:08.000 [2024-11-04 02:31:55.030349] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:08.000 [2024-11-04 02:31:55.030356] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:08.000 [2024-11-04 02:31:55.030364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.000 [2024-11-04 02:31:55.030371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:08.000 [2024-11-04 02:31:55.030379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.663 ms 00:21:08.000 [2024-11-04 02:31:55.030388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.000 [2024-11-04 02:31:55.061638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.000 [2024-11-04 02:31:55.061689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:08.000 [2024-11-04 02:31:55.061701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.205 ms 00:21:08.000 [2024-11-04 02:31:55.061710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.000 [2024-11-04 02:31:55.061799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.000 [2024-11-04 02:31:55.061819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:08.000 [2024-11-04 02:31:55.061828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:21:08.000 [2024-11-04 02:31:55.061836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.262 [2024-11-04 02:31:55.110719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.262 [2024-11-04 02:31:55.110941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:08.262 [2024-11-04 02:31:55.110964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.804 ms 00:21:08.262 [2024-11-04 02:31:55.110974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:08.262 [2024-11-04 02:31:55.111023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.262 [2024-11-04 02:31:55.111033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:08.262 [2024-11-04 02:31:55.111043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:08.262 [2024-11-04 02:31:55.111056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.262 [2024-11-04 02:31:55.111607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.262 [2024-11-04 02:31:55.111641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:08.262 [2024-11-04 02:31:55.111652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.475 ms 00:21:08.262 [2024-11-04 02:31:55.111660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.262 [2024-11-04 02:31:55.111831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.262 [2024-11-04 02:31:55.111846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:08.262 [2024-11-04 02:31:55.111856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:21:08.262 [2024-11-04 02:31:55.111882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.262 [2024-11-04 02:31:55.127309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.262 [2024-11-04 02:31:55.127349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:08.262 [2024-11-04 02:31:55.127360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.399 ms 00:21:08.262 [2024-11-04 02:31:55.127370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.262 [2024-11-04 02:31:55.141252] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:08.262 [2024-11-04 02:31:55.141297] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:08.262 [2024-11-04 02:31:55.141310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.262 [2024-11-04 02:31:55.141318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:08.262 [2024-11-04 02:31:55.141329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.833 ms 00:21:08.262 [2024-11-04 02:31:55.141336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.262 [2024-11-04 02:31:55.166910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.262 [2024-11-04 02:31:55.166974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:08.262 [2024-11-04 02:31:55.166986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.521 ms 00:21:08.262 [2024-11-04 02:31:55.166995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.262 [2024-11-04 02:31:55.179622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.262 [2024-11-04 02:31:55.179662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:08.262 [2024-11-04 02:31:55.179674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.575 ms 00:21:08.262 [2024-11-04 02:31:55.179681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.262 [2024-11-04 02:31:55.191911] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.262 [2024-11-04 02:31:55.191952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:08.262 [2024-11-04 02:31:55.191964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.169 ms 00:21:08.262 [2024-11-04 02:31:55.191972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.262 [2024-11-04 02:31:55.192614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.262 [2024-11-04 02:31:55.192646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:08.262 [2024-11-04 02:31:55.192657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:21:08.262 [2024-11-04 02:31:55.192665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.262 [2024-11-04 02:31:55.256442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.262 [2024-11-04 02:31:55.256501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:08.262 [2024-11-04 02:31:55.256518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.752 ms 00:21:08.262 [2024-11-04 02:31:55.256534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.262 [2024-11-04 02:31:55.267720] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:08.262 [2024-11-04 02:31:55.270777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.263 [2024-11-04 02:31:55.270819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:08.263 [2024-11-04 02:31:55.270831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.188 ms 00:21:08.263 [2024-11-04 02:31:55.270839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.263 [2024-11-04 02:31:55.270937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.263 [2024-11-04 02:31:55.270950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:08.263 [2024-11-04 02:31:55.270961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:08.263 [2024-11-04 02:31:55.270969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.263 [2024-11-04 02:31:55.271043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.263 [2024-11-04 02:31:55.271054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:08.263 [2024-11-04 02:31:55.271063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:08.263 [2024-11-04 02:31:55.271071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.263 [2024-11-04 02:31:55.271093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.263 [2024-11-04 02:31:55.271103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:08.263 [2024-11-04 02:31:55.271111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:08.263 [2024-11-04 02:31:55.271119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.263 [2024-11-04 02:31:55.271154] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:08.263 [2024-11-04 02:31:55.271166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.263 [2024-11-04 02:31:55.271175] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:08.263 [2024-11-04 02:31:55.271183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:08.263 [2024-11-04 02:31:55.271191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.263 [2024-11-04 02:31:55.296661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.263 [2024-11-04 02:31:55.296831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:08.263 [2024-11-04 02:31:55.296914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.451 ms 00:21:08.263 [2024-11-04 02:31:55.296940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.263 [2024-11-04 02:31:55.297036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.263 [2024-11-04 02:31:55.297065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:08.263 [2024-11-04 02:31:55.297088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:21:08.263 [2024-11-04 02:31:55.297107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.263 [2024-11-04 02:31:55.298818] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 298.395 ms, result 0 00:21:09.207  [2024-11-04T02:31:57.705Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-04T02:31:58.649Z] Copying: 29/1024 [MB] (12 MBps) [2024-11-04T02:31:59.591Z] Copying: 40/1024 [MB] (11 MBps) [2024-11-04T02:32:00.532Z] Copying: 57/1024 [MB] (17 MBps) [2024-11-04T02:32:01.489Z] Copying: 79/1024 [MB] (21 MBps) [2024-11-04T02:32:02.432Z] Copying: 98/1024 [MB] (19 MBps) [2024-11-04T02:32:03.377Z] Copying: 117/1024 [MB] (18 MBps) [2024-11-04T02:32:04.359Z] Copying: 131/1024 [MB] (13 MBps) [2024-11-04T02:32:05.731Z] Copying: 145/1024 [MB] (13 MBps) [2024-11-04T02:32:06.664Z] Copying: 175/1024 [MB] (30 MBps) [2024-11-04T02:32:07.596Z] Copying: 209/1024 [MB] (33 MBps) [2024-11-04T02:32:08.539Z] Copying: 240/1024 [MB] (31 MBps) [2024-11-04T02:32:09.479Z] Copying: 256/1024 [MB] (16 MBps) [2024-11-04T02:32:10.428Z] Copying: 273/1024 [MB] (16 MBps) [2024-11-04T02:32:11.377Z] Copying: 286/1024 [MB] (13 MBps) [2024-11-04T02:32:12.318Z] Copying: 301/1024 [MB] (15 MBps) [2024-11-04T02:32:13.705Z] Copying: 319/1024 [MB] (18 MBps) [2024-11-04T02:32:14.649Z] Copying: 337/1024 [MB] (17 MBps) [2024-11-04T02:32:15.594Z] Copying: 354/1024 [MB] (17 MBps) [2024-11-04T02:32:16.537Z] Copying: 373/1024 [MB] (19 MBps) [2024-11-04T02:32:17.481Z] Copying: 394/1024 [MB] (20 MBps) [2024-11-04T02:32:18.425Z] Copying: 416/1024 [MB] (22 MBps) [2024-11-04T02:32:19.369Z] Copying: 435/1024 [MB] (18 MBps) [2024-11-04T02:32:20.312Z] Copying: 451/1024 [MB] (15 MBps) [2024-11-04T02:32:21.695Z] Copying: 468/1024 [MB] (17 MBps) [2024-11-04T02:32:22.638Z] Copying: 490/1024 [MB] (21 MBps) [2024-11-04T02:32:23.583Z] Copying: 511/1024 [MB] (21 MBps) [2024-11-04T02:32:24.588Z] Copying: 524/1024 [MB] (12 MBps) [2024-11-04T02:32:25.533Z] Copying: 534/1024 [MB] (10 MBps) [2024-11-04T02:32:26.477Z] Copying: 544/1024 [MB] (10 MBps) [2024-11-04T02:32:27.425Z] Copying: 555/1024 [MB] (10 MBps) [2024-11-04T02:32:28.371Z] Copying: 567/1024 [MB] (12 MBps) [2024-11-04T02:32:29.312Z] Copying: 586/1024 [MB] (18 MBps) [2024-11-04T02:32:30.697Z] Copying: 601/1024 [MB] (14 MBps) [2024-11-04T02:32:31.644Z] Copying: 618/1024 [MB] (16 MBps) [2024-11-04T02:32:32.587Z] Copying: 628/1024 [MB] (10 MBps) 
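
The Copying lines surrounding this point are the restore job's periodic progress meter: cumulative MB written to ftl0 plus the throughput of the last interval, closed out by an overall average (18 MBps here). A minimal sketch of how such a meter can be derived from consecutive cumulative samples; everything below is illustrative, not spdk_dd's implementation:

#include <stdio.h>

/* One progress sample: cumulative MB copied at a given time. */
struct sample { double t_sec; double copied_mb; };

/* Rate over the last interval, printed in the log's own shape. */
static void report(struct sample prev, struct sample cur, double total_mb)
{
    double rate = (cur.copied_mb - prev.copied_mb) / (cur.t_sec - prev.t_sec);
    printf("Copying: %.0f/%.0f [MB] (%.0f MBps)\n", cur.copied_mb, total_mb, rate);
}

int main(void)
{
    /* Synthetic samples roughly matching the first entries in this run:
     * 16 MB after ~1 s, 29 MB one second later. */
    struct sample s0 = { 0.0, 0.0 };
    struct sample s1 = { 1.0, 16.0 };
    struct sample s2 = { 2.0, 29.0 };

    report(s0, s1, 1024.0);   /* Copying: 16/1024 [MB] (16 MBps) */
    report(s1, s2, 1024.0);   /* Copying: 29/1024 [MB] (13 MBps) */

    /* Whole-run average: 1024 MB in roughly 57 s comes out at 18 MBps,
     * matching the "(average 18 MBps)" closing line. */
    printf("Copying: 1024/1024 [MB] (average %.0f MBps)\n", 1024.0 / 57.0);
    return 0;
}
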
[2024-11-04T02:32:33.540Z] Copying: 642/1024 [MB] (14 MBps) [2024-11-04T02:32:34.484Z] Copying: 662/1024 [MB] (19 MBps) [2024-11-04T02:32:35.425Z] Copying: 678/1024 [MB] (15 MBps) [2024-11-04T02:32:36.365Z] Copying: 699/1024 [MB] (21 MBps) [2024-11-04T02:32:37.749Z] Copying: 711/1024 [MB] (12 MBps) [2024-11-04T02:32:38.320Z] Copying: 735/1024 [MB] (23 MBps) [2024-11-04T02:32:39.698Z] Copying: 755/1024 [MB] (19 MBps) [2024-11-04T02:32:40.639Z] Copying: 797/1024 [MB] (42 MBps) [2024-11-04T02:32:41.581Z] Copying: 818/1024 [MB] (20 MBps) [2024-11-04T02:32:42.523Z] Copying: 829/1024 [MB] (11 MBps) [2024-11-04T02:32:43.465Z] Copying: 857/1024 [MB] (27 MBps) [2024-11-04T02:32:44.444Z] Copying: 870/1024 [MB] (12 MBps) [2024-11-04T02:32:45.386Z] Copying: 902/1024 [MB] (32 MBps) [2024-11-04T02:32:46.321Z] Copying: 918/1024 [MB] (15 MBps) [2024-11-04T02:32:47.706Z] Copying: 954/1024 [MB] (36 MBps) [2024-11-04T02:32:48.650Z] Copying: 964/1024 [MB] (10 MBps) [2024-11-04T02:32:49.592Z] Copying: 990/1024 [MB] (25 MBps) [2024-11-04T02:32:50.537Z] Copying: 1004/1024 [MB] (14 MBps) [2024-11-04T02:32:51.476Z] Copying: 1022/1024 [MB] (17 MBps) [2024-11-04T02:32:51.476Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-04 02:32:51.201528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.365 [2024-11-04 02:32:51.201577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:04.365 [2024-11-04 02:32:51.201589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:04.365 [2024-11-04 02:32:51.201596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.365 [2024-11-04 02:32:51.203312] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:04.365 [2024-11-04 02:32:51.206704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.365 [2024-11-04 02:32:51.206732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:04.365 [2024-11-04 02:32:51.206741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.361 ms 00:22:04.365 [2024-11-04 02:32:51.206749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.365 [2024-11-04 02:32:51.216599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.365 [2024-11-04 02:32:51.216712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:04.365 [2024-11-04 02:32:51.216725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.777 ms 00:22:04.365 [2024-11-04 02:32:51.216732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.365 [2024-11-04 02:32:51.231912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.365 [2024-11-04 02:32:51.231938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:04.365 [2024-11-04 02:32:51.231947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.161 ms 00:22:04.365 [2024-11-04 02:32:51.231953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.365 [2024-11-04 02:32:51.236714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.365 [2024-11-04 02:32:51.236736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:04.365 [2024-11-04 02:32:51.236744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.741 ms 00:22:04.365 [2024-11-04 02:32:51.236751] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.365 [2024-11-04 02:32:51.254788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.365 [2024-11-04 02:32:51.254815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:04.365 [2024-11-04 02:32:51.254823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.004 ms 00:22:04.365 [2024-11-04 02:32:51.254829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.365 [2024-11-04 02:32:51.266163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.365 [2024-11-04 02:32:51.266293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:04.365 [2024-11-04 02:32:51.266310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.309 ms 00:22:04.365 [2024-11-04 02:32:51.266315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.365 [2024-11-04 02:32:51.309978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.365 [2024-11-04 02:32:51.310004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:04.365 [2024-11-04 02:32:51.310013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.636 ms 00:22:04.365 [2024-11-04 02:32:51.310019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.365 [2024-11-04 02:32:51.327413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.365 [2024-11-04 02:32:51.327436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:04.365 [2024-11-04 02:32:51.327444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.382 ms 00:22:04.365 [2024-11-04 02:32:51.327450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.365 [2024-11-04 02:32:51.344715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.365 [2024-11-04 02:32:51.344745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:04.365 [2024-11-04 02:32:51.344752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.239 ms 00:22:04.365 [2024-11-04 02:32:51.344758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.365 [2024-11-04 02:32:51.361631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.365 [2024-11-04 02:32:51.361655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:04.365 [2024-11-04 02:32:51.361662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.849 ms 00:22:04.365 [2024-11-04 02:32:51.361668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.365 [2024-11-04 02:32:51.378380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.365 [2024-11-04 02:32:51.378403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:04.365 [2024-11-04 02:32:51.378410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.671 ms 00:22:04.365 [2024-11-04 02:32:51.378415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.365 [2024-11-04 02:32:51.378440] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:04.365 [2024-11-04 02:32:51.378450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 105216 / 261120 wr_cnt: 1 state: open 00:22:04.365 [2024-11-04 02:32:51.378458] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:04.365 [2024-11-04 02:32:51.378464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:04.365 [2024-11-04 02:32:51.378470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:04.365 [2024-11-04 02:32:51.378476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:04.365 [2024-11-04 02:32:51.378481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:04.365 [2024-11-04 02:32:51.378487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:04.365 [2024-11-04 02:32:51.378493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:04.365 [2024-11-04 02:32:51.378498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:04.365 [2024-11-04 02:32:51.378504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:04.365 [2024-11-04 02:32:51.378514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:04.365 [2024-11-04 02:32:51.378520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:04.365 [2024-11-04 02:32:51.378526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:04.365 [2024-11-04 02:32:51.378532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:04.365 [2024-11-04 02:32:51.378537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:04.365 [2024-11-04 02:32:51.378543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:04.365 [2024-11-04 02:32:51.378549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:04.365 [2024-11-04 02:32:51.378554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:04.365 [2024-11-04 02:32:51.378560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:04.365 [2024-11-04 02:32:51.378566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:04.365 [2024-11-04 02:32:51.378571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:04.365 [2024-11-04 02:32:51.378577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:04.365 [2024-11-04 02:32:51.378583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:04.365 [2024-11-04 02:32:51.378589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:04.365 [2024-11-04 02:32:51.378595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:04.365 [2024-11-04 02:32:51.378600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:04.365 [2024-11-04 02:32:51.378607] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 
[2024-11-04 02:32:51.378744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 
state: free 00:22:04.366 [2024-11-04 02:32:51.378902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.378994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.379000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.379006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.379011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.379017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.379022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.379027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.379033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:04.366 [2024-11-04 02:32:51.379045] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
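Each band line in the dump above reads "valid_blocks / band_size_blocks" (every band free here, 0 / 261120). As a rough sketch of the implied geometry, assuming the typical 4 KiB FTL logical block (an assumption; the block size is not printed in this log):

    # Band capacity implied by the dump above. The 4096-byte logical block
    # is an assumption (typical for SPDK FTL), not a value from this log.
    blocks_per_band = 261120          # "0 / 261120" in each band line
    block_size = 4096                 # assumed FTL logical block size
    print(f"{blocks_per_band * block_size / (1 << 20):.0f} MiB per band")  # -> 1020 MiB per band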
00:22:04.366 [2024-11-04 02:32:51.379058] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: eb37cb69-a1b4-437f-9116-fc28bc24e968 
00:22:04.366 [2024-11-04 02:32:51.379064] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 105216 
00:22:04.366 [2024-11-04 02:32:51.379069] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 106176 
00:22:04.366 [2024-11-04 02:32:51.379074] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 105216 
00:22:04.366 [2024-11-04 02:32:51.379080] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0091 
00:22:04.366 [2024-11-04 02:32:51.379085] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 
00:22:04.366 [2024-11-04 02:32:51.379091] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:22:04.366 [2024-11-04 02:32:51.379104] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 
00:22:04.366 [2024-11-04 02:32:51.379109] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 
00:22:04.366 [2024-11-04 02:32:51.379114] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 
00:22:04.366 [2024-11-04 02:32:51.379119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:04.366 [2024-11-04 02:32:51.379125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 
00:22:04.366 [2024-11-04 02:32:51.379131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms 
00:22:04.366 [2024-11-04 02:32:51.379137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:04.366 [2024-11-04 02:32:51.388634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:04.366 [2024-11-04 02:32:51.388729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
00:22:04.366 [2024-11-04 02:32:51.388740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.486 ms 
00:22:04.366 [2024-11-04 02:32:51.388745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:04.366 [2024-11-04 02:32:51.389027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:04.367 [2024-11-04 02:32:51.389035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 
00:22:04.367 [2024-11-04 02:32:51.389042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 
00:22:04.367 [2024-11-04 02:32:51.389047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:04.367 [2024-11-04 02:32:51.414693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:22:04.367 [2024-11-04 02:32:51.414720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 
00:22:04.367 [2024-11-04 02:32:51.414729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:22:04.367 [2024-11-04 02:32:51.414735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:04.367 [2024-11-04 02:32:51.414773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:22:04.367 [2024-11-04 02:32:51.414779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 
00:22:04.367 [2024-11-04 02:32:51.414785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:22:04.367 [2024-11-04 02:32:51.414790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:04.367 [2024-11-04 02:32:51.414833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
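The WAF figure in the statistics dump above is simply total media writes divided by user writes; a quick check with the two counters copied from the dump:

    # Write amplification factor (WAF) recomputed from the counters in the
    # ftl_dev_dump_stats output above.
    total_writes = 106176             # "total writes" from the dump
    user_writes = 105216              # "user writes" from the dump
    print(f"WAF: {total_writes / user_writes:.4f}")  # -> WAF: 1.0091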
00:22:04.367 [2024-11-04 02:32:51.414841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:04.367 [2024-11-04 02:32:51.414847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.367 [2024-11-04 02:32:51.414855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.367 [2024-11-04 02:32:51.414880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.367 [2024-11-04 02:32:51.414887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:04.367 [2024-11-04 02:32:51.414893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.367 [2024-11-04 02:32:51.414898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.367 [2024-11-04 02:32:51.473217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.367 [2024-11-04 02:32:51.473250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:04.367 [2024-11-04 02:32:51.473258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.367 [2024-11-04 02:32:51.473267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.627 [2024-11-04 02:32:51.520964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.627 [2024-11-04 02:32:51.520994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:04.627 [2024-11-04 02:32:51.521003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.627 [2024-11-04 02:32:51.521009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.627 [2024-11-04 02:32:51.521056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.627 [2024-11-04 02:32:51.521063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:04.627 [2024-11-04 02:32:51.521069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.627 [2024-11-04 02:32:51.521075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.627 [2024-11-04 02:32:51.521105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.627 [2024-11-04 02:32:51.521112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:04.627 [2024-11-04 02:32:51.521118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.627 [2024-11-04 02:32:51.521124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.627 [2024-11-04 02:32:51.521191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.627 [2024-11-04 02:32:51.521199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:04.627 [2024-11-04 02:32:51.521205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.627 [2024-11-04 02:32:51.521210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.627 [2024-11-04 02:32:51.521231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.627 [2024-11-04 02:32:51.521240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:04.627 [2024-11-04 02:32:51.521246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.627 [2024-11-04 02:32:51.521252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.627 [2024-11-04 
02:32:51.521279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.627 [2024-11-04 02:32:51.521286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:04.627 [2024-11-04 02:32:51.521292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.627 [2024-11-04 02:32:51.521297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.627 [2024-11-04 02:32:51.521332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.627 [2024-11-04 02:32:51.521339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:04.627 [2024-11-04 02:32:51.521345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.627 [2024-11-04 02:32:51.521351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.627 [2024-11-04 02:32:51.521439] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 322.242 ms, result 0 00:22:05.563 00:22:05.563 00:22:05.563 02:32:52 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:22:05.563 [2024-11-04 02:32:52.540990] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:22:05.563 [2024-11-04 02:32:52.541113] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76767 ] 00:22:05.822 [2024-11-04 02:32:52.697509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.822 [2024-11-04 02:32:52.779396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.081 [2024-11-04 02:32:52.984273] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:06.081 [2024-11-04 02:32:52.984418] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:06.081 [2024-11-04 02:32:53.135286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.081 [2024-11-04 02:32:53.135412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:06.081 [2024-11-04 02:32:53.135432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:06.081 [2024-11-04 02:32:53.135438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.081 [2024-11-04 02:32:53.135477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.081 [2024-11-04 02:32:53.135484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:06.081 [2024-11-04 02:32:53.135492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:22:06.081 [2024-11-04 02:32:53.135498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.081 [2024-11-04 02:32:53.135512] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:06.081 [2024-11-04 02:32:53.136059] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:06.081 [2024-11-04 02:32:53.136074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.081 [2024-11-04 
02:32:53.136080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:06.081 [2024-11-04 02:32:53.136087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:22:06.081 [2024-11-04 02:32:53.136092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.081 [2024-11-04 02:32:53.137066] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:06.081 [2024-11-04 02:32:53.146545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.081 [2024-11-04 02:32:53.146655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:06.081 [2024-11-04 02:32:53.146669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.481 ms 00:22:06.081 [2024-11-04 02:32:53.146676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.081 [2024-11-04 02:32:53.146713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.081 [2024-11-04 02:32:53.146722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:06.081 [2024-11-04 02:32:53.146728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:06.081 [2024-11-04 02:32:53.146734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.081 [2024-11-04 02:32:53.151104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.081 [2024-11-04 02:32:53.151129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:06.081 [2024-11-04 02:32:53.151136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.325 ms 00:22:06.081 [2024-11-04 02:32:53.151142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.081 [2024-11-04 02:32:53.151200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.081 [2024-11-04 02:32:53.151206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:06.081 [2024-11-04 02:32:53.151212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:22:06.081 [2024-11-04 02:32:53.151219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.081 [2024-11-04 02:32:53.151250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.081 [2024-11-04 02:32:53.151258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:06.081 [2024-11-04 02:32:53.151264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:06.081 [2024-11-04 02:32:53.151269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.081 [2024-11-04 02:32:53.151283] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:06.081 [2024-11-04 02:32:53.153959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.081 [2024-11-04 02:32:53.153980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:06.081 [2024-11-04 02:32:53.153988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.680 ms 00:22:06.081 [2024-11-04 02:32:53.153995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.081 [2024-11-04 02:32:53.154020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.082 [2024-11-04 02:32:53.154027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:06.082 [2024-11-04 
02:32:53.154033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:06.082 [2024-11-04 02:32:53.154039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.082 [2024-11-04 02:32:53.154052] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:06.082 [2024-11-04 02:32:53.154066] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:06.082 [2024-11-04 02:32:53.154093] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:06.082 [2024-11-04 02:32:53.154106] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:06.082 [2024-11-04 02:32:53.154185] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:06.082 [2024-11-04 02:32:53.154193] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:06.082 [2024-11-04 02:32:53.154201] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:06.082 [2024-11-04 02:32:53.154209] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:06.082 [2024-11-04 02:32:53.154216] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:06.082 [2024-11-04 02:32:53.154222] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:06.082 [2024-11-04 02:32:53.154228] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:06.082 [2024-11-04 02:32:53.154234] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:06.082 [2024-11-04 02:32:53.154239] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:06.082 [2024-11-04 02:32:53.154247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.082 [2024-11-04 02:32:53.154252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:06.082 [2024-11-04 02:32:53.154258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:22:06.082 [2024-11-04 02:32:53.154263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.082 [2024-11-04 02:32:53.154326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.082 [2024-11-04 02:32:53.154332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:06.082 [2024-11-04 02:32:53.154338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:06.082 [2024-11-04 02:32:53.154343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.082 [2024-11-04 02:32:53.154418] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:06.082 [2024-11-04 02:32:53.154427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:06.082 [2024-11-04 02:32:53.154434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:06.082 [2024-11-04 02:32:53.154439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.082 [2024-11-04 02:32:53.154445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:06.082 [2024-11-04 02:32:53.154450] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 0.12 MiB 00:22:06.082 [2024-11-04 02:32:53.154455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:06.082 [2024-11-04 02:32:53.154461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:06.082 [2024-11-04 02:32:53.154466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:06.082 [2024-11-04 02:32:53.154472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:06.082 [2024-11-04 02:32:53.154477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:06.082 [2024-11-04 02:32:53.154482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:06.082 [2024-11-04 02:32:53.154486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:06.082 [2024-11-04 02:32:53.154491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:06.082 [2024-11-04 02:32:53.154498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:06.082 [2024-11-04 02:32:53.154506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.082 [2024-11-04 02:32:53.154511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:06.082 [2024-11-04 02:32:53.154516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:06.082 [2024-11-04 02:32:53.154521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.082 [2024-11-04 02:32:53.154526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:06.082 [2024-11-04 02:32:53.154531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:06.082 [2024-11-04 02:32:53.154536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.082 [2024-11-04 02:32:53.154541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:06.082 [2024-11-04 02:32:53.154546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:06.082 [2024-11-04 02:32:53.154550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.082 [2024-11-04 02:32:53.154555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:06.082 [2024-11-04 02:32:53.154560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:06.082 [2024-11-04 02:32:53.154565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.082 [2024-11-04 02:32:53.154570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:06.082 [2024-11-04 02:32:53.154574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:06.082 [2024-11-04 02:32:53.154579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.082 [2024-11-04 02:32:53.154584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:06.082 [2024-11-04 02:32:53.154589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:06.082 [2024-11-04 02:32:53.154593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:06.082 [2024-11-04 02:32:53.154598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:06.082 [2024-11-04 02:32:53.154603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:06.082 [2024-11-04 02:32:53.154607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:06.082 [2024-11-04 02:32:53.154612] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:06.082 [2024-11-04 02:32:53.154617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:06.082 [2024-11-04 02:32:53.154622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.082 [2024-11-04 02:32:53.154627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:06.082 [2024-11-04 02:32:53.154632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:06.082 [2024-11-04 02:32:53.154637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.082 [2024-11-04 02:32:53.154642] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:06.082 [2024-11-04 02:32:53.154648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:06.082 [2024-11-04 02:32:53.154653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:06.082 [2024-11-04 02:32:53.154659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.082 [2024-11-04 02:32:53.154665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:06.082 [2024-11-04 02:32:53.154670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:06.082 [2024-11-04 02:32:53.154675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:06.082 [2024-11-04 02:32:53.154680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:06.082 [2024-11-04 02:32:53.154685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:06.082 [2024-11-04 02:32:53.154690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:06.082 [2024-11-04 02:32:53.154696] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:06.082 [2024-11-04 02:32:53.154703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:06.082 [2024-11-04 02:32:53.154709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:06.082 [2024-11-04 02:32:53.154715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:06.082 [2024-11-04 02:32:53.154720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:06.082 [2024-11-04 02:32:53.154726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:06.082 [2024-11-04 02:32:53.154731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:06.082 [2024-11-04 02:32:53.154736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:06.082 [2024-11-04 02:32:53.154742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:06.082 [2024-11-04 02:32:53.154747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:06.082 [2024-11-04 02:32:53.154752] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:06.082 [2024-11-04 02:32:53.154757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:06.082 [2024-11-04 02:32:53.154762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:06.082 [2024-11-04 02:32:53.154768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:06.082 [2024-11-04 02:32:53.154773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:06.082 [2024-11-04 02:32:53.154778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:06.082 [2024-11-04 02:32:53.154783] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:06.082 [2024-11-04 02:32:53.154789] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:06.082 [2024-11-04 02:32:53.154798] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:06.082 [2024-11-04 02:32:53.154803] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:06.082 [2024-11-04 02:32:53.154809] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:06.083 [2024-11-04 02:32:53.154815] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:06.083 [2024-11-04 02:32:53.154820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.083 [2024-11-04 02:32:53.154826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:06.083 [2024-11-04 02:32:53.154832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.455 ms 00:22:06.083 [2024-11-04 02:32:53.154838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.083 [2024-11-04 02:32:53.175743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.083 [2024-11-04 02:32:53.175769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:06.083 [2024-11-04 02:32:53.175777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.860 ms 00:22:06.083 [2024-11-04 02:32:53.175783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.083 [2024-11-04 02:32:53.175847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.083 [2024-11-04 02:32:53.175856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:06.083 [2024-11-04 02:32:53.175862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:22:06.083 [2024-11-04 02:32:53.175884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.341 [2024-11-04 02:32:53.213730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.341 [2024-11-04 02:32:53.213761] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:06.341 [2024-11-04 02:32:53.213771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.807 ms 00:22:06.341 [2024-11-04 02:32:53.213777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.341 [2024-11-04 02:32:53.213808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.341 [2024-11-04 02:32:53.213816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:06.341 [2024-11-04 02:32:53.213822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:06.341 [2024-11-04 02:32:53.213831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.341 [2024-11-04 02:32:53.214151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.341 [2024-11-04 02:32:53.214164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:06.341 [2024-11-04 02:32:53.214171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:22:06.341 [2024-11-04 02:32:53.214177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.341 [2024-11-04 02:32:53.214273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.341 [2024-11-04 02:32:53.214280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:06.341 [2024-11-04 02:32:53.214287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:22:06.341 [2024-11-04 02:32:53.214292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.341 [2024-11-04 02:32:53.224708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.341 [2024-11-04 02:32:53.224734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:06.341 [2024-11-04 02:32:53.224742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.398 ms 00:22:06.341 [2024-11-04 02:32:53.224747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.341 [2024-11-04 02:32:53.234521] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:22:06.341 [2024-11-04 02:32:53.234548] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:06.341 [2024-11-04 02:32:53.234558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.341 [2024-11-04 02:32:53.234564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:06.341 [2024-11-04 02:32:53.234571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.738 ms 00:22:06.341 [2024-11-04 02:32:53.234576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.341 [2024-11-04 02:32:53.253116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.341 [2024-11-04 02:32:53.253147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:06.341 [2024-11-04 02:32:53.253156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.509 ms 00:22:06.341 [2024-11-04 02:32:53.253162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.341 [2024-11-04 02:32:53.261949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.341 [2024-11-04 02:32:53.261979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 
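Each management step in this log is a trace_step group: an Action or Rollback line, then name, duration, and status. A minimal sketch for summing the per-step durations out of a saved copy of this console log, to compare against the totals that finish_msg prints ('FTL shutdown', 322.242 ms; 'FTL startup', 207.799 ms); the sum is only approximate, since time can pass between steps:

    import re

    # Sum the "duration: X ms" values from trace_step records. The log
    # filename below is hypothetical; pass any saved copy of this log.
    DURATION = re.compile(r"430:trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: ([0-9.]+) ms")

    def total_step_ms(log_text: str) -> float:
        return sum(float(ms) for ms in DURATION.findall(log_text))

    # usage: print(total_step_ms(open("console.log").read()))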
00:22:06.341 [2024-11-04 02:32:53.261986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.769 ms 00:22:06.341 [2024-11-04 02:32:53.261992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.341 [2024-11-04 02:32:53.270472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.341 [2024-11-04 02:32:53.270496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:06.341 [2024-11-04 02:32:53.270503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.455 ms 00:22:06.341 [2024-11-04 02:32:53.270509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.341 [2024-11-04 02:32:53.270968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.341 [2024-11-04 02:32:53.271017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:06.341 [2024-11-04 02:32:53.271026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.404 ms 00:22:06.341 [2024-11-04 02:32:53.271033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.341 [2024-11-04 02:32:53.314219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.341 [2024-11-04 02:32:53.314362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:06.341 [2024-11-04 02:32:53.314377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.171 ms 00:22:06.341 [2024-11-04 02:32:53.314387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.341 [2024-11-04 02:32:53.322040] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:06.341 [2024-11-04 02:32:53.323673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.341 [2024-11-04 02:32:53.323703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:06.341 [2024-11-04 02:32:53.323711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.260 ms 00:22:06.341 [2024-11-04 02:32:53.323717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.341 [2024-11-04 02:32:53.323768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.341 [2024-11-04 02:32:53.323776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:06.341 [2024-11-04 02:32:53.323783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:06.341 [2024-11-04 02:32:53.323790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.341 [2024-11-04 02:32:53.324812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.341 [2024-11-04 02:32:53.324839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:06.341 [2024-11-04 02:32:53.324846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.001 ms 00:22:06.341 [2024-11-04 02:32:53.324852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.341 [2024-11-04 02:32:53.324880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.341 [2024-11-04 02:32:53.324888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:06.341 [2024-11-04 02:32:53.324894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:06.341 [2024-11-04 02:32:53.324900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.341 [2024-11-04 
02:32:53.324948] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:06.341 [2024-11-04 02:32:53.324957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.341 [2024-11-04 02:32:53.324963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:06.341 [2024-11-04 02:32:53.324970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:06.341 [2024-11-04 02:32:53.324975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.341 [2024-11-04 02:32:53.342577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.341 [2024-11-04 02:32:53.342602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:06.341 [2024-11-04 02:32:53.342611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.588 ms 00:22:06.341 [2024-11-04 02:32:53.342620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.341 [2024-11-04 02:32:53.342673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.341 [2024-11-04 02:32:53.342681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:06.341 [2024-11-04 02:32:53.342687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:22:06.341 [2024-11-04 02:32:53.342693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.341 [2024-11-04 02:32:53.343427] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 207.799 ms, result 0 00:22:07.724  [2024-11-04T02:32:55.780Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-04T02:32:56.725Z] Copying: 29/1024 [MB] (13 MBps) [2024-11-04T02:32:57.666Z] Copying: 42/1024 [MB] (12 MBps) [2024-11-04T02:32:58.608Z] Copying: 56/1024 [MB] (14 MBps) [2024-11-04T02:32:59.548Z] Copying: 76/1024 [MB] (19 MBps) [2024-11-04T02:33:00.488Z] Copying: 87/1024 [MB] (10 MBps) [2024-11-04T02:33:01.873Z] Copying: 103/1024 [MB] (15 MBps) [2024-11-04T02:33:02.814Z] Copying: 116/1024 [MB] (13 MBps) [2024-11-04T02:33:03.763Z] Copying: 137/1024 [MB] (20 MBps) [2024-11-04T02:33:04.743Z] Copying: 161/1024 [MB] (24 MBps) [2024-11-04T02:33:05.689Z] Copying: 176/1024 [MB] (14 MBps) [2024-11-04T02:33:06.636Z] Copying: 198/1024 [MB] (22 MBps) [2024-11-04T02:33:07.579Z] Copying: 214/1024 [MB] (15 MBps) [2024-11-04T02:33:08.522Z] Copying: 226/1024 [MB] (12 MBps) [2024-11-04T02:33:09.910Z] Copying: 243/1024 [MB] (16 MBps) [2024-11-04T02:33:10.855Z] Copying: 254/1024 [MB] (10 MBps) [2024-11-04T02:33:11.796Z] Copying: 266/1024 [MB] (12 MBps) [2024-11-04T02:33:12.741Z] Copying: 288/1024 [MB] (22 MBps) [2024-11-04T02:33:13.682Z] Copying: 302/1024 [MB] (13 MBps) [2024-11-04T02:33:14.626Z] Copying: 320/1024 [MB] (18 MBps) [2024-11-04T02:33:15.574Z] Copying: 338/1024 [MB] (17 MBps) [2024-11-04T02:33:16.519Z] Copying: 352/1024 [MB] (13 MBps) [2024-11-04T02:33:17.906Z] Copying: 362/1024 [MB] (10 MBps) [2024-11-04T02:33:18.848Z] Copying: 386/1024 [MB] (23 MBps) [2024-11-04T02:33:19.791Z] Copying: 404/1024 [MB] (17 MBps) [2024-11-04T02:33:20.729Z] Copying: 415/1024 [MB] (11 MBps) [2024-11-04T02:33:21.672Z] Copying: 434/1024 [MB] (18 MBps) [2024-11-04T02:33:22.613Z] Copying: 451/1024 [MB] (16 MBps) [2024-11-04T02:33:23.553Z] Copying: 473/1024 [MB] (22 MBps) [2024-11-04T02:33:24.532Z] Copying: 498/1024 [MB] (24 MBps) [2024-11-04T02:33:25.919Z] Copying: 515/1024 [MB] (17 MBps) [2024-11-04T02:33:26.492Z] Copying: 539/1024 
[MB] (23 MBps) [2024-11-04T02:33:27.879Z] Copying: 562/1024 [MB] (22 MBps) [2024-11-04T02:33:28.822Z] Copying: 586/1024 [MB] (24 MBps) [2024-11-04T02:33:29.766Z] Copying: 611/1024 [MB] (25 MBps) [2024-11-04T02:33:30.708Z] Copying: 633/1024 [MB] (21 MBps) [2024-11-04T02:33:31.657Z] Copying: 654/1024 [MB] (21 MBps) [2024-11-04T02:33:32.604Z] Copying: 676/1024 [MB] (22 MBps) [2024-11-04T02:33:33.551Z] Copying: 694/1024 [MB] (17 MBps) [2024-11-04T02:33:34.497Z] Copying: 704/1024 [MB] (10 MBps) [2024-11-04T02:33:35.886Z] Copying: 715/1024 [MB] (10 MBps) [2024-11-04T02:33:36.832Z] Copying: 725/1024 [MB] (10 MBps) [2024-11-04T02:33:37.778Z] Copying: 736/1024 [MB] (10 MBps) [2024-11-04T02:33:38.724Z] Copying: 746/1024 [MB] (10 MBps) [2024-11-04T02:33:39.669Z] Copying: 756/1024 [MB] (10 MBps) [2024-11-04T02:33:40.614Z] Copying: 767/1024 [MB] (10 MBps) [2024-11-04T02:33:41.560Z] Copying: 777/1024 [MB] (10 MBps) [2024-11-04T02:33:42.506Z] Copying: 788/1024 [MB] (10 MBps) [2024-11-04T02:33:43.896Z] Copying: 798/1024 [MB] (10 MBps) [2024-11-04T02:33:44.508Z] Copying: 809/1024 [MB] (10 MBps) [2024-11-04T02:33:45.898Z] Copying: 819/1024 [MB] (10 MBps) [2024-11-04T02:33:46.844Z] Copying: 830/1024 [MB] (10 MBps) [2024-11-04T02:33:47.790Z] Copying: 840/1024 [MB] (10 MBps) [2024-11-04T02:33:48.736Z] Copying: 850/1024 [MB] (10 MBps) [2024-11-04T02:33:49.680Z] Copying: 861/1024 [MB] (10 MBps) [2024-11-04T02:33:50.622Z] Copying: 873/1024 [MB] (12 MBps) [2024-11-04T02:33:51.568Z] Copying: 884/1024 [MB] (10 MBps) [2024-11-04T02:33:52.512Z] Copying: 902/1024 [MB] (17 MBps) [2024-11-04T02:33:53.902Z] Copying: 924/1024 [MB] (22 MBps) [2024-11-04T02:33:54.845Z] Copying: 940/1024 [MB] (16 MBps) [2024-11-04T02:33:55.787Z] Copying: 955/1024 [MB] (14 MBps) [2024-11-04T02:33:56.732Z] Copying: 968/1024 [MB] (13 MBps) [2024-11-04T02:33:57.673Z] Copying: 990/1024 [MB] (22 MBps) [2024-11-04T02:33:58.616Z] Copying: 1006/1024 [MB] (15 MBps) [2024-11-04T02:33:58.878Z] Copying: 1021/1024 [MB] (14 MBps) [2024-11-04T02:33:58.879Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-04 02:33:58.809231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.768 [2024-11-04 02:33:58.809325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:11.768 [2024-11-04 02:33:58.809344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:11.768 [2024-11-04 02:33:58.809361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.768 [2024-11-04 02:33:58.809390] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:11.768 [2024-11-04 02:33:58.813620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.768 [2024-11-04 02:33:58.813660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:11.768 [2024-11-04 02:33:58.813674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.210 ms 00:23:11.768 [2024-11-04 02:33:58.813685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.768 [2024-11-04 02:33:58.813984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.768 [2024-11-04 02:33:58.814019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:11.768 [2024-11-04 02:33:58.814030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:23:11.768 [2024-11-04 02:33:58.814040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
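As a sanity check on the 1024 MB copy that just completed above: taking the spdk_dd start stamp (02:32:52.540990) and the final progress stamp (02:33:58.879Z), both read off this log, the overall rate lands right at the reported "average 15 MBps":

    from datetime import datetime, timezone

    # Elapsed time of the spdk_dd copy; endpoints read from this log.
    start = datetime(2024, 11, 4, 2, 32, 52, 540990, tzinfo=timezone.utc)
    end = datetime(2024, 11, 4, 2, 33, 58, 879000, tzinfo=timezone.utc)
    elapsed = (end - start).total_seconds()          # about 66.3 s
    print(f"{1024 / elapsed:.1f} MBps")              # -> 15.4 MBps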
00:23:11.768 [2024-11-04 02:33:58.820157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.768 [2024-11-04 02:33:58.820334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:11.768 [2024-11-04 02:33:58.820358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.092 ms 00:23:11.768 [2024-11-04 02:33:58.820367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.768 [2024-11-04 02:33:58.826752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.768 [2024-11-04 02:33:58.826789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:11.768 [2024-11-04 02:33:58.826800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.258 ms 00:23:11.768 [2024-11-04 02:33:58.826809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.768 [2024-11-04 02:33:58.853879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.768 [2024-11-04 02:33:58.854043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:11.768 [2024-11-04 02:33:58.854113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.001 ms 00:23:11.768 [2024-11-04 02:33:58.854138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.768 [2024-11-04 02:33:58.870358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.768 [2024-11-04 02:33:58.870526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:11.768 [2024-11-04 02:33:58.870603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.172 ms 00:23:11.768 [2024-11-04 02:33:58.870628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.342 [2024-11-04 02:33:59.207740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.342 [2024-11-04 02:33:59.207963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:12.342 [2024-11-04 02:33:59.208050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 337.052 ms 00:23:12.342 [2024-11-04 02:33:59.208081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.342 [2024-11-04 02:33:59.234509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.342 [2024-11-04 02:33:59.234686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:12.342 [2024-11-04 02:33:59.234904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.390 ms 00:23:12.342 [2024-11-04 02:33:59.234949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.342 [2024-11-04 02:33:59.260217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.342 [2024-11-04 02:33:59.260263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:12.342 [2024-11-04 02:33:59.260289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.121 ms 00:23:12.342 [2024-11-04 02:33:59.260297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.342 [2024-11-04 02:33:59.284885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.342 [2024-11-04 02:33:59.284933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:12.342 [2024-11-04 02:33:59.284945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.543 ms 00:23:12.342 [2024-11-04 02:33:59.284953] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.342 [2024-11-04 02:33:59.309105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.342 [2024-11-04 02:33:59.309151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:12.342 [2024-11-04 02:33:59.309163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.079 ms 00:23:12.342 [2024-11-04 02:33:59.309171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.342 [2024-11-04 02:33:59.309216] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:12.342 [2024-11-04 02:33:59.309232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:23:12.342 [2024-11-04 02:33:59.309244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:12.342 [2024-11-04 02:33:59.309253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:12.342 [2024-11-04 02:33:59.309261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:12.342 [2024-11-04 02:33:59.309269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:12.342 [2024-11-04 02:33:59.309277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:12.342 [2024-11-04 02:33:59.309286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:12.342 [2024-11-04 02:33:59.309293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:12.342 [2024-11-04 02:33:59.309301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:12.342 [2024-11-04 02:33:59.309309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:12.342 [2024-11-04 02:33:59.309318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:12.342 [2024-11-04 02:33:59.309327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:12.342 [2024-11-04 02:33:59.309335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:12.342 [2024-11-04 02:33:59.309343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:12.342 [2024-11-04 02:33:59.309351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:12.342 [2024-11-04 02:33:59.309359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:12.342 [2024-11-04 02:33:59.309367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:12.342 [2024-11-04 02:33:59.309374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:12.342 [2024-11-04 02:33:59.309381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:12.342 [2024-11-04 02:33:59.309388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:12.342 [2024-11-04 02:33:59.309396] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:12.342 [2024-11-04 02:33:59.309403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:12.342 [2024-11-04 02:33:59.309410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:12.342 [2024-11-04 02:33:59.309417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:12.342 [2024-11-04 02:33:59.309424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:12.342 [2024-11-04 02:33:59.309432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:12.342 [2024-11-04 02:33:59.309441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:12.342 [2024-11-04 02:33:59.309448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:12.342 [2024-11-04 02:33:59.309456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:12.342 [2024-11-04 02:33:59.309465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:12.342 [2024-11-04 02:33:59.309473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:12.342 [2024-11-04 02:33:59.309481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:12.342 [2024-11-04 02:33:59.309488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:12.342 [2024-11-04 02:33:59.309497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 
02:33:59.309588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 
00:23:12.343 [2024-11-04 02:33:59.309780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.309996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:12.343 [2024-11-04 02:33:59.310003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 
wr_cnt: 0 state: free
00:23:12.343 [2024-11-04 02:33:59.310011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:23:12.343 [2024-11-04 02:33:59.310019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:23:12.343 [2024-11-04 02:33:59.310027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:23:12.343 [2024-11-04 02:33:59.310036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:23:12.343 [2024-11-04 02:33:59.310043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:23:12.343 [2024-11-04 02:33:59.310059] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:23:12.343 [2024-11-04 02:33:59.310068] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: eb37cb69-a1b4-437f-9116-fc28bc24e968
00:23:12.343 [2024-11-04 02:33:59.310076] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072
00:23:12.343 [2024-11-04 02:33:59.310084] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 26816
00:23:12.343 [2024-11-04 02:33:59.310091] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 25856
00:23:12.343 [2024-11-04 02:33:59.310100] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0371
00:23:12.343 [2024-11-04 02:33:59.310107] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:23:12.343 [2024-11-04 02:33:59.310121] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:23:12.343 [2024-11-04 02:33:59.310129] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:23:12.343 [2024-11-04 02:33:59.310142] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:23:12.343 [2024-11-04 02:33:59.310148] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:23:12.343 [2024-11-04 02:33:59.310156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:12.343 [2024-11-04 02:33:59.310164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:23:12.343 [2024-11-04 02:33:59.310174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.941 ms
00:23:12.343 [2024-11-04 02:33:59.310183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:12.343 [2024-11-04 02:33:59.323801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:12.343 [2024-11-04 02:33:59.323997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:23:12.343 [2024-11-04 02:33:59.324017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.598 ms
00:23:12.343 [2024-11-04 02:33:59.324033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:12.344 [2024-11-04 02:33:59.324439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:12.344 [2024-11-04 02:33:59.324450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:23:12.344 [2024-11-04 02:33:59.324460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms
00:23:12.344 [2024-11-04 02:33:59.324467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:12.344 [2024-11-04 02:33:59.360992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:12.344 [2024-11-04 02:33:59.361043] mngt/ftl_mngt.c: 428:trace_step:
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:12.344 [2024-11-04 02:33:59.361055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.344 [2024-11-04 02:33:59.361065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.344 [2024-11-04 02:33:59.361133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.344 [2024-11-04 02:33:59.361143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:12.344 [2024-11-04 02:33:59.361153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.344 [2024-11-04 02:33:59.361162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.344 [2024-11-04 02:33:59.361226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.344 [2024-11-04 02:33:59.361236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:12.344 [2024-11-04 02:33:59.361246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.344 [2024-11-04 02:33:59.361260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.344 [2024-11-04 02:33:59.361277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.344 [2024-11-04 02:33:59.361287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:12.344 [2024-11-04 02:33:59.361296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.344 [2024-11-04 02:33:59.361305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.344 [2024-11-04 02:33:59.444718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.344 [2024-11-04 02:33:59.444775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:12.344 [2024-11-04 02:33:59.444796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.344 [2024-11-04 02:33:59.444805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.606 [2024-11-04 02:33:59.514197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.606 [2024-11-04 02:33:59.514255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:12.606 [2024-11-04 02:33:59.514269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.606 [2024-11-04 02:33:59.514278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.606 [2024-11-04 02:33:59.514366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.606 [2024-11-04 02:33:59.514377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:12.606 [2024-11-04 02:33:59.514387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.606 [2024-11-04 02:33:59.514396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.606 [2024-11-04 02:33:59.514440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.606 [2024-11-04 02:33:59.514451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:12.606 [2024-11-04 02:33:59.514459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.606 [2024-11-04 02:33:59.514467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.606 [2024-11-04 02:33:59.514570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
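The statistics dump above is the payload of this restore run: WAF (write amplification factor) is simply total writes divided by user writes, and the reported 1.0371 checks out against the counters. Each trace_step quartet also logs a per-step duration, while the rollback entries all report 0.000 ms. A minimal sketch for checking both from a captured copy of this output, assuming GNU grep/awk and a hypothetical log file name ftl.log (neither is part of the test itself):

  # WAF = total writes / user writes -> 26816 / 25856
  awk 'BEGIN { printf "WAF: %.4f\n", 26816 / 25856 }'    # prints WAF: 1.0371

  # Tally the per-step 'duration:' fields from the trace_step quartets;
  # the 0.000 ms rollback steps contribute nothing to the sum.
  grep -oP 'duration: \K[0-9.]+' ftl.log |
    awk '{ s += $1 } END { printf "steps total: %.3f ms\n", s }'

The step total need not match the overall figure that finish_msg reports below exactly, since time spent between steps is not attributed to any single step.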
00:23:12.606 [2024-11-04 02:33:59.514581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:12.606 [2024-11-04 02:33:59.514589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.606 [2024-11-04 02:33:59.514597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.606 [2024-11-04 02:33:59.514633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.606 [2024-11-04 02:33:59.514643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:12.606 [2024-11-04 02:33:59.514652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.606 [2024-11-04 02:33:59.514660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.606 [2024-11-04 02:33:59.514702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.606 [2024-11-04 02:33:59.514713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:12.606 [2024-11-04 02:33:59.514721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.606 [2024-11-04 02:33:59.514730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.606 [2024-11-04 02:33:59.514782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.606 [2024-11-04 02:33:59.514794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:12.606 [2024-11-04 02:33:59.514803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.607 [2024-11-04 02:33:59.514811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.607 [2024-11-04 02:33:59.514984] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 705.719 ms, result 0 00:23:13.179 00:23:13.179 00:23:13.179 02:34:00 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:15.734 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:15.734 02:34:02 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:23:15.734 02:34:02 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:23:15.734 02:34:02 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:15.734 02:34:02 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:15.734 02:34:02 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:15.735 Process with pid 74525 is not found 00:23:15.735 02:34:02 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 74525 00:23:15.735 02:34:02 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 74525 ']' 00:23:15.735 02:34:02 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 74525 00:23:15.735 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (74525) - No such process 00:23:15.735 02:34:02 ftl.ftl_restore -- common/autotest_common.sh@979 -- # echo 'Process with pid 74525 is not found' 00:23:15.735 02:34:02 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:23:15.735 Remove shared memory files 00:23:15.735 02:34:02 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:15.735 02:34:02 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:23:15.735 02:34:02 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:23:15.735 
02:34:02 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:23:15.735 02:34:02 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:15.735 02:34:02 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:23:15.735 ************************************ 00:23:15.735 END TEST ftl_restore 00:23:15.735 ************************************ 00:23:15.735 00:23:15.735 real 4m46.805s 00:23:15.735 user 4m34.758s 00:23:15.735 sys 0m11.490s 00:23:15.735 02:34:02 ftl.ftl_restore -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:15.735 02:34:02 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:23:15.735 02:34:02 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:15.735 02:34:02 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:23:15.735 02:34:02 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:15.735 02:34:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:15.735 ************************************ 00:23:15.735 START TEST ftl_dirty_shutdown 00:23:15.735 ************************************ 00:23:15.735 02:34:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:15.735 * Looking for test storage... 00:23:15.735 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:15.735 02:34:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:15.735 02:34:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:23:15.735 02:34:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:15.997 02:34:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:15.997 02:34:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:15.997 02:34:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:15.997 02:34:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:15.997 02:34:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:15.997 02:34:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:15.997 02:34:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:15.997 02:34:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:15.997 02:34:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:15.997 02:34:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:15.997 02:34:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:15.997 02:34:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:15.997 02:34:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:15.997 02:34:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:23:15.997 02:34:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:15.997 02:34:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:15.997 02:34:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:15.997 02:34:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:15.997 02:34:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:15.997 02:34:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:15.997 02:34:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:15.997 02:34:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:15.997 02:34:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:15.997 02:34:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:15.997 02:34:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:15.997 02:34:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:15.997 02:34:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:15.997 02:34:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:15.997 02:34:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:23:15.997 02:34:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:15.997 02:34:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:15.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.997 --rc genhtml_branch_coverage=1 00:23:15.997 --rc genhtml_function_coverage=1 00:23:15.997 --rc genhtml_legend=1 00:23:15.997 --rc geninfo_all_blocks=1 00:23:15.997 --rc geninfo_unexecuted_blocks=1 00:23:15.997 00:23:15.997 ' 00:23:15.997 02:34:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:15.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.997 --rc genhtml_branch_coverage=1 00:23:15.997 --rc genhtml_function_coverage=1 00:23:15.997 --rc genhtml_legend=1 00:23:15.997 --rc geninfo_all_blocks=1 00:23:15.997 --rc geninfo_unexecuted_blocks=1 00:23:15.997 00:23:15.997 ' 00:23:15.997 02:34:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:15.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.997 --rc genhtml_branch_coverage=1 00:23:15.997 --rc genhtml_function_coverage=1 00:23:15.997 --rc genhtml_legend=1 00:23:15.998 --rc geninfo_all_blocks=1 00:23:15.998 --rc geninfo_unexecuted_blocks=1 00:23:15.998 00:23:15.998 ' 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:15.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.998 --rc genhtml_branch_coverage=1 00:23:15.998 --rc genhtml_function_coverage=1 00:23:15.998 --rc genhtml_legend=1 00:23:15.998 --rc geninfo_all_blocks=1 00:23:15.998 --rc geninfo_unexecuted_blocks=1 00:23:15.998 00:23:15.998 ' 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:23:15.998 02:34:02 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=77559 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 77559 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@833 -- # '[' -z 77559 ']' 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:15.998 02:34:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:15.998 [2024-11-04 02:34:02.977792] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
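Once the target is up, the setup below sizes each bdev by pulling block_size and num_blocks out of bdev_get_bdevs and converting to MiB: for nvme0n1 that is 4096 B x 1310720 blocks = 5120 MiB, and for the thin-provisioned lvol it is 4096 B x 26476544 blocks = 103424 MiB. A minimal sketch of the same computation, not part of the test output, assuming a running target and with rpc.py standing in for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path:

  # Mirror get_bdev_size: block_size * num_blocks, bytes -> MiB
  bs=$(rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[] .block_size')    # 4096
  nb=$(rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[] .num_blocks')    # 1310720
  echo $(( bs * nb / 1024 / 1024 ))                                # 5120 (MiB)

This matches the bs=4096, nb=1310720, bdev_size=5120 values the trace prints for nvme0n1 further down.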
00:23:15.998 [2024-11-04 02:34:02.978594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77559 ] 00:23:16.260 [2024-11-04 02:34:03.146199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.260 [2024-11-04 02:34:03.262963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.205 02:34:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:17.205 02:34:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@866 -- # return 0 00:23:17.205 02:34:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:17.205 02:34:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:23:17.205 02:34:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:17.205 02:34:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:23:17.205 02:34:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:23:17.205 02:34:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:17.205 02:34:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:17.205 02:34:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:23:17.205 02:34:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:17.205 02:34:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:23:17.205 02:34:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:23:17.205 02:34:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:23:17.205 02:34:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:23:17.205 02:34:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:17.519 02:34:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:17.519 { 00:23:17.519 "name": "nvme0n1", 00:23:17.519 "aliases": [ 00:23:17.519 "b5407017-08dc-4c80-bd6b-9b3627456c68" 00:23:17.519 ], 00:23:17.519 "product_name": "NVMe disk", 00:23:17.519 "block_size": 4096, 00:23:17.519 "num_blocks": 1310720, 00:23:17.519 "uuid": "b5407017-08dc-4c80-bd6b-9b3627456c68", 00:23:17.519 "numa_id": -1, 00:23:17.519 "assigned_rate_limits": { 00:23:17.519 "rw_ios_per_sec": 0, 00:23:17.519 "rw_mbytes_per_sec": 0, 00:23:17.519 "r_mbytes_per_sec": 0, 00:23:17.519 "w_mbytes_per_sec": 0 00:23:17.519 }, 00:23:17.519 "claimed": true, 00:23:17.519 "claim_type": "read_many_write_one", 00:23:17.519 "zoned": false, 00:23:17.519 "supported_io_types": { 00:23:17.519 "read": true, 00:23:17.519 "write": true, 00:23:17.519 "unmap": true, 00:23:17.519 "flush": true, 00:23:17.519 "reset": true, 00:23:17.519 "nvme_admin": true, 00:23:17.519 "nvme_io": true, 00:23:17.519 "nvme_io_md": false, 00:23:17.519 "write_zeroes": true, 00:23:17.519 "zcopy": false, 00:23:17.519 "get_zone_info": false, 00:23:17.519 "zone_management": false, 00:23:17.519 "zone_append": false, 00:23:17.519 "compare": true, 00:23:17.519 "compare_and_write": false, 00:23:17.519 "abort": true, 00:23:17.519 "seek_hole": false, 00:23:17.519 "seek_data": false, 00:23:17.519 
"copy": true, 00:23:17.519 "nvme_iov_md": false 00:23:17.519 }, 00:23:17.519 "driver_specific": { 00:23:17.519 "nvme": [ 00:23:17.519 { 00:23:17.519 "pci_address": "0000:00:11.0", 00:23:17.519 "trid": { 00:23:17.519 "trtype": "PCIe", 00:23:17.519 "traddr": "0000:00:11.0" 00:23:17.519 }, 00:23:17.519 "ctrlr_data": { 00:23:17.519 "cntlid": 0, 00:23:17.519 "vendor_id": "0x1b36", 00:23:17.519 "model_number": "QEMU NVMe Ctrl", 00:23:17.519 "serial_number": "12341", 00:23:17.519 "firmware_revision": "8.0.0", 00:23:17.519 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:17.519 "oacs": { 00:23:17.519 "security": 0, 00:23:17.519 "format": 1, 00:23:17.519 "firmware": 0, 00:23:17.519 "ns_manage": 1 00:23:17.519 }, 00:23:17.519 "multi_ctrlr": false, 00:23:17.519 "ana_reporting": false 00:23:17.519 }, 00:23:17.519 "vs": { 00:23:17.519 "nvme_version": "1.4" 00:23:17.519 }, 00:23:17.519 "ns_data": { 00:23:17.519 "id": 1, 00:23:17.519 "can_share": false 00:23:17.519 } 00:23:17.519 } 00:23:17.519 ], 00:23:17.519 "mp_policy": "active_passive" 00:23:17.519 } 00:23:17.519 } 00:23:17.519 ]' 00:23:17.519 02:34:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:23:17.519 02:34:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:23:17.519 02:34:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:23:17.519 02:34:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:23:17.519 02:34:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:23:17.519 02:34:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:23:17.519 02:34:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:23:17.519 02:34:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:17.519 02:34:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:23:17.519 02:34:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:17.519 02:34:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:17.808 02:34:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=ceeb90ab-717b-46d5-8b60-bbe967283d88 00:23:17.808 02:34:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:23:17.808 02:34:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ceeb90ab-717b-46d5-8b60-bbe967283d88 00:23:18.069 02:34:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:18.329 02:34:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=5a4888ba-06af-44a5-9a76-df5fd4ac74de 00:23:18.330 02:34:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5a4888ba-06af-44a5-9a76-df5fd4ac74de 00:23:18.330 02:34:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=b01c906a-6e54-4a42-b733-f51cbb625c4e 00:23:18.330 02:34:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:23:18.330 02:34:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b01c906a-6e54-4a42-b733-f51cbb625c4e 00:23:18.330 02:34:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:23:18.330 02:34:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:23:18.330 02:34:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=b01c906a-6e54-4a42-b733-f51cbb625c4e 00:23:18.330 02:34:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:23:18.330 02:34:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size b01c906a-6e54-4a42-b733-f51cbb625c4e 00:23:18.330 02:34:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=b01c906a-6e54-4a42-b733-f51cbb625c4e 00:23:18.330 02:34:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:23:18.330 02:34:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:23:18.330 02:34:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:23:18.330 02:34:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b01c906a-6e54-4a42-b733-f51cbb625c4e 00:23:18.589 02:34:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:18.589 { 00:23:18.589 "name": "b01c906a-6e54-4a42-b733-f51cbb625c4e", 00:23:18.589 "aliases": [ 00:23:18.589 "lvs/nvme0n1p0" 00:23:18.589 ], 00:23:18.589 "product_name": "Logical Volume", 00:23:18.589 "block_size": 4096, 00:23:18.589 "num_blocks": 26476544, 00:23:18.589 "uuid": "b01c906a-6e54-4a42-b733-f51cbb625c4e", 00:23:18.589 "assigned_rate_limits": { 00:23:18.589 "rw_ios_per_sec": 0, 00:23:18.589 "rw_mbytes_per_sec": 0, 00:23:18.589 "r_mbytes_per_sec": 0, 00:23:18.589 "w_mbytes_per_sec": 0 00:23:18.589 }, 00:23:18.589 "claimed": false, 00:23:18.589 "zoned": false, 00:23:18.589 "supported_io_types": { 00:23:18.589 "read": true, 00:23:18.589 "write": true, 00:23:18.589 "unmap": true, 00:23:18.589 "flush": false, 00:23:18.589 "reset": true, 00:23:18.589 "nvme_admin": false, 00:23:18.589 "nvme_io": false, 00:23:18.589 "nvme_io_md": false, 00:23:18.589 "write_zeroes": true, 00:23:18.589 "zcopy": false, 00:23:18.589 "get_zone_info": false, 00:23:18.589 "zone_management": false, 00:23:18.589 "zone_append": false, 00:23:18.589 "compare": false, 00:23:18.589 "compare_and_write": false, 00:23:18.589 "abort": false, 00:23:18.589 "seek_hole": true, 00:23:18.589 "seek_data": true, 00:23:18.589 "copy": false, 00:23:18.589 "nvme_iov_md": false 00:23:18.589 }, 00:23:18.589 "driver_specific": { 00:23:18.589 "lvol": { 00:23:18.589 "lvol_store_uuid": "5a4888ba-06af-44a5-9a76-df5fd4ac74de", 00:23:18.589 "base_bdev": "nvme0n1", 00:23:18.589 "thin_provision": true, 00:23:18.589 "num_allocated_clusters": 0, 00:23:18.589 "snapshot": false, 00:23:18.589 "clone": false, 00:23:18.589 "esnap_clone": false 00:23:18.589 } 00:23:18.589 } 00:23:18.589 } 00:23:18.589 ]' 00:23:18.589 02:34:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:23:18.589 02:34:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:23:18.589 02:34:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:23:18.589 02:34:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:23:18.589 02:34:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:23:18.589 02:34:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:23:18.848 02:34:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:23:18.848 02:34:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:23:18.848 02:34:05 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:19.106 02:34:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:19.106 02:34:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:19.106 02:34:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size b01c906a-6e54-4a42-b733-f51cbb625c4e 00:23:19.106 02:34:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=b01c906a-6e54-4a42-b733-f51cbb625c4e 00:23:19.106 02:34:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:23:19.106 02:34:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:23:19.106 02:34:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:23:19.106 02:34:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b01c906a-6e54-4a42-b733-f51cbb625c4e 00:23:19.106 02:34:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:19.106 { 00:23:19.106 "name": "b01c906a-6e54-4a42-b733-f51cbb625c4e", 00:23:19.106 "aliases": [ 00:23:19.106 "lvs/nvme0n1p0" 00:23:19.106 ], 00:23:19.106 "product_name": "Logical Volume", 00:23:19.106 "block_size": 4096, 00:23:19.106 "num_blocks": 26476544, 00:23:19.106 "uuid": "b01c906a-6e54-4a42-b733-f51cbb625c4e", 00:23:19.106 "assigned_rate_limits": { 00:23:19.106 "rw_ios_per_sec": 0, 00:23:19.106 "rw_mbytes_per_sec": 0, 00:23:19.106 "r_mbytes_per_sec": 0, 00:23:19.106 "w_mbytes_per_sec": 0 00:23:19.106 }, 00:23:19.106 "claimed": false, 00:23:19.106 "zoned": false, 00:23:19.106 "supported_io_types": { 00:23:19.106 "read": true, 00:23:19.106 "write": true, 00:23:19.106 "unmap": true, 00:23:19.106 "flush": false, 00:23:19.106 "reset": true, 00:23:19.106 "nvme_admin": false, 00:23:19.106 "nvme_io": false, 00:23:19.106 "nvme_io_md": false, 00:23:19.106 "write_zeroes": true, 00:23:19.106 "zcopy": false, 00:23:19.106 "get_zone_info": false, 00:23:19.106 "zone_management": false, 00:23:19.106 "zone_append": false, 00:23:19.106 "compare": false, 00:23:19.106 "compare_and_write": false, 00:23:19.106 "abort": false, 00:23:19.106 "seek_hole": true, 00:23:19.106 "seek_data": true, 00:23:19.106 "copy": false, 00:23:19.106 "nvme_iov_md": false 00:23:19.106 }, 00:23:19.106 "driver_specific": { 00:23:19.106 "lvol": { 00:23:19.106 "lvol_store_uuid": "5a4888ba-06af-44a5-9a76-df5fd4ac74de", 00:23:19.106 "base_bdev": "nvme0n1", 00:23:19.106 "thin_provision": true, 00:23:19.106 "num_allocated_clusters": 0, 00:23:19.106 "snapshot": false, 00:23:19.106 "clone": false, 00:23:19.106 "esnap_clone": false 00:23:19.106 } 00:23:19.106 } 00:23:19.106 } 00:23:19.106 ]' 00:23:19.106 02:34:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:23:19.106 02:34:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:23:19.106 02:34:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:23:19.366 02:34:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:23:19.366 02:34:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:23:19.366 02:34:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:23:19.366 02:34:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:23:19.366 02:34:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:19.366 02:34:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:23:19.366 02:34:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size b01c906a-6e54-4a42-b733-f51cbb625c4e 00:23:19.366 02:34:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=b01c906a-6e54-4a42-b733-f51cbb625c4e 00:23:19.366 02:34:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:23:19.366 02:34:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:23:19.366 02:34:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:23:19.366 02:34:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b01c906a-6e54-4a42-b733-f51cbb625c4e 00:23:19.624 02:34:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:19.624 { 00:23:19.624 "name": "b01c906a-6e54-4a42-b733-f51cbb625c4e", 00:23:19.624 "aliases": [ 00:23:19.624 "lvs/nvme0n1p0" 00:23:19.624 ], 00:23:19.624 "product_name": "Logical Volume", 00:23:19.624 "block_size": 4096, 00:23:19.624 "num_blocks": 26476544, 00:23:19.624 "uuid": "b01c906a-6e54-4a42-b733-f51cbb625c4e", 00:23:19.624 "assigned_rate_limits": { 00:23:19.624 "rw_ios_per_sec": 0, 00:23:19.624 "rw_mbytes_per_sec": 0, 00:23:19.624 "r_mbytes_per_sec": 0, 00:23:19.624 "w_mbytes_per_sec": 0 00:23:19.624 }, 00:23:19.624 "claimed": false, 00:23:19.624 "zoned": false, 00:23:19.624 "supported_io_types": { 00:23:19.624 "read": true, 00:23:19.624 "write": true, 00:23:19.624 "unmap": true, 00:23:19.624 "flush": false, 00:23:19.624 "reset": true, 00:23:19.624 "nvme_admin": false, 00:23:19.624 "nvme_io": false, 00:23:19.624 "nvme_io_md": false, 00:23:19.624 "write_zeroes": true, 00:23:19.624 "zcopy": false, 00:23:19.624 "get_zone_info": false, 00:23:19.624 "zone_management": false, 00:23:19.624 "zone_append": false, 00:23:19.624 "compare": false, 00:23:19.624 "compare_and_write": false, 00:23:19.624 "abort": false, 00:23:19.624 "seek_hole": true, 00:23:19.624 "seek_data": true, 00:23:19.624 "copy": false, 00:23:19.624 "nvme_iov_md": false 00:23:19.624 }, 00:23:19.624 "driver_specific": { 00:23:19.624 "lvol": { 00:23:19.624 "lvol_store_uuid": "5a4888ba-06af-44a5-9a76-df5fd4ac74de", 00:23:19.624 "base_bdev": "nvme0n1", 00:23:19.624 "thin_provision": true, 00:23:19.624 "num_allocated_clusters": 0, 00:23:19.624 "snapshot": false, 00:23:19.624 "clone": false, 00:23:19.624 "esnap_clone": false 00:23:19.624 } 00:23:19.624 } 00:23:19.624 } 00:23:19.624 ]' 00:23:19.624 02:34:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:23:19.624 02:34:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:23:19.624 02:34:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:23:19.624 02:34:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:23:19.624 02:34:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:23:19.625 02:34:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:23:19.625 02:34:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:23:19.625 02:34:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d b01c906a-6e54-4a42-b733-f51cbb625c4e 
--l2p_dram_limit 10' 00:23:19.625 02:34:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:23:19.625 02:34:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:23:19.625 02:34:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:19.625 02:34:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b01c906a-6e54-4a42-b733-f51cbb625c4e --l2p_dram_limit 10 -c nvc0n1p0 00:23:19.885 [2024-11-04 02:34:06.873590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.885 [2024-11-04 02:34:06.873712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:19.885 [2024-11-04 02:34:06.873731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:19.885 [2024-11-04 02:34:06.873737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.885 [2024-11-04 02:34:06.873785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.885 [2024-11-04 02:34:06.873792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:19.885 [2024-11-04 02:34:06.873801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:23:19.885 [2024-11-04 02:34:06.873820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.885 [2024-11-04 02:34:06.873840] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:19.885 [2024-11-04 02:34:06.874434] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:19.885 [2024-11-04 02:34:06.874449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.885 [2024-11-04 02:34:06.874455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:19.885 [2024-11-04 02:34:06.874463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.615 ms 00:23:19.885 [2024-11-04 02:34:06.874469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.885 [2024-11-04 02:34:06.874518] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 51631818-2bcf-469a-a6d7-de5c91fa5a94 00:23:19.885 [2024-11-04 02:34:06.875457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.885 [2024-11-04 02:34:06.875474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:19.885 [2024-11-04 02:34:06.875482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:19.885 [2024-11-04 02:34:06.875489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.885 [2024-11-04 02:34:06.880460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.885 [2024-11-04 02:34:06.880566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:19.885 [2024-11-04 02:34:06.880578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.937 ms 00:23:19.885 [2024-11-04 02:34:06.880588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.885 [2024-11-04 02:34:06.880657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.885 [2024-11-04 02:34:06.880665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:19.885 [2024-11-04 02:34:06.880672] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:23:19.885 [2024-11-04 02:34:06.880681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.885 [2024-11-04 02:34:06.880718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.885 [2024-11-04 02:34:06.880727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:19.885 [2024-11-04 02:34:06.880733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:19.885 [2024-11-04 02:34:06.880740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.885 [2024-11-04 02:34:06.880758] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:19.885 [2024-11-04 02:34:06.883657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.885 [2024-11-04 02:34:06.883761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:19.885 [2024-11-04 02:34:06.883776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.903 ms 00:23:19.885 [2024-11-04 02:34:06.883786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.885 [2024-11-04 02:34:06.883812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.885 [2024-11-04 02:34:06.883819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:19.885 [2024-11-04 02:34:06.883826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:19.885 [2024-11-04 02:34:06.883832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.885 [2024-11-04 02:34:06.883845] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:19.885 [2024-11-04 02:34:06.883963] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:19.885 [2024-11-04 02:34:06.883976] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:19.885 [2024-11-04 02:34:06.883985] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:19.885 [2024-11-04 02:34:06.883993] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:19.885 [2024-11-04 02:34:06.884000] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:19.885 [2024-11-04 02:34:06.884008] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:19.885 [2024-11-04 02:34:06.884014] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:19.885 [2024-11-04 02:34:06.884021] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:19.885 [2024-11-04 02:34:06.884026] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:19.885 [2024-11-04 02:34:06.884035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.885 [2024-11-04 02:34:06.884041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:19.885 [2024-11-04 02:34:06.884048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:23:19.885 [2024-11-04 02:34:06.884057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.885 [2024-11-04 02:34:06.884123] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.885 [2024-11-04 02:34:06.884130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:19.885 [2024-11-04 02:34:06.884137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:23:19.885 [2024-11-04 02:34:06.884142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.885 [2024-11-04 02:34:06.884216] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:19.885 [2024-11-04 02:34:06.884224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:19.885 [2024-11-04 02:34:06.884232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:19.885 [2024-11-04 02:34:06.884238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:19.885 [2024-11-04 02:34:06.884245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:19.885 [2024-11-04 02:34:06.884250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:19.885 [2024-11-04 02:34:06.884256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:19.885 [2024-11-04 02:34:06.884261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:19.885 [2024-11-04 02:34:06.884268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:19.885 [2024-11-04 02:34:06.884273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:19.885 [2024-11-04 02:34:06.884279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:19.885 [2024-11-04 02:34:06.884284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:19.885 [2024-11-04 02:34:06.884290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:19.885 [2024-11-04 02:34:06.884295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:19.885 [2024-11-04 02:34:06.884302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:19.885 [2024-11-04 02:34:06.884307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:19.885 [2024-11-04 02:34:06.884314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:19.885 [2024-11-04 02:34:06.884319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:19.885 [2024-11-04 02:34:06.884325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:19.885 [2024-11-04 02:34:06.884331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:19.885 [2024-11-04 02:34:06.884339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:19.885 [2024-11-04 02:34:06.884344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:19.885 [2024-11-04 02:34:06.884350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:19.885 [2024-11-04 02:34:06.884355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:19.885 [2024-11-04 02:34:06.884361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:19.885 [2024-11-04 02:34:06.884366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:19.885 [2024-11-04 02:34:06.884372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:19.885 [2024-11-04 02:34:06.884377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:19.885 [2024-11-04 02:34:06.884383] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:19.885 [2024-11-04 02:34:06.884388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:19.885 [2024-11-04 02:34:06.884394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:19.885 [2024-11-04 02:34:06.884399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:19.885 [2024-11-04 02:34:06.884407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:19.885 [2024-11-04 02:34:06.884411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:19.885 [2024-11-04 02:34:06.884417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:19.885 [2024-11-04 02:34:06.884422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:19.885 [2024-11-04 02:34:06.884428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:19.885 [2024-11-04 02:34:06.884433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:19.885 [2024-11-04 02:34:06.884439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:19.885 [2024-11-04 02:34:06.884444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:19.885 [2024-11-04 02:34:06.884450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:19.885 [2024-11-04 02:34:06.884454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:19.885 [2024-11-04 02:34:06.884460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:19.886 [2024-11-04 02:34:06.884465] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:19.886 [2024-11-04 02:34:06.884472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:19.886 [2024-11-04 02:34:06.884478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:19.886 [2024-11-04 02:34:06.884485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:19.886 [2024-11-04 02:34:06.884491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:19.886 [2024-11-04 02:34:06.884499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:19.886 [2024-11-04 02:34:06.884504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:19.886 [2024-11-04 02:34:06.884511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:19.886 [2024-11-04 02:34:06.884515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:19.886 [2024-11-04 02:34:06.884522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:19.886 [2024-11-04 02:34:06.884530] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:19.886 [2024-11-04 02:34:06.884538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:19.886 [2024-11-04 02:34:06.884545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:19.886 [2024-11-04 02:34:06.884552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:19.886 [2024-11-04 02:34:06.884557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:19.886 [2024-11-04 02:34:06.884564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:19.886 [2024-11-04 02:34:06.884569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:19.886 [2024-11-04 02:34:06.884576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:19.886 [2024-11-04 02:34:06.884581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:19.886 [2024-11-04 02:34:06.884588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:19.886 [2024-11-04 02:34:06.884593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:19.886 [2024-11-04 02:34:06.884601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:19.886 [2024-11-04 02:34:06.884607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:19.886 [2024-11-04 02:34:06.884613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:19.886 [2024-11-04 02:34:06.884618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:19.886 [2024-11-04 02:34:06.884625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:19.886 [2024-11-04 02:34:06.884630] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:19.886 [2024-11-04 02:34:06.884637] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:19.886 [2024-11-04 02:34:06.884646] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:19.886 [2024-11-04 02:34:06.884652] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:19.886 [2024-11-04 02:34:06.884657] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:19.886 [2024-11-04 02:34:06.884664] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:19.886 [2024-11-04 02:34:06.884669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.886 [2024-11-04 02:34:06.884676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:19.886 [2024-11-04 02:34:06.884685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.507 ms 00:23:19.886 [2024-11-04 02:34:06.884692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.886 [2024-11-04 02:34:06.884737] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:19.886 [2024-11-04 02:34:06.884748] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:24.103 [2024-11-04 02:34:10.628036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.103 [2024-11-04 02:34:10.628130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:24.103 [2024-11-04 02:34:10.628150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3743.281 ms 00:23:24.103 [2024-11-04 02:34:10.628162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.103 [2024-11-04 02:34:10.660844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.103 [2024-11-04 02:34:10.660925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:24.103 [2024-11-04 02:34:10.660941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.426 ms 00:23:24.103 [2024-11-04 02:34:10.660952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.103 [2024-11-04 02:34:10.661103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.103 [2024-11-04 02:34:10.661118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:24.103 [2024-11-04 02:34:10.661128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:23:24.103 [2024-11-04 02:34:10.661142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.103 [2024-11-04 02:34:10.697124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.103 [2024-11-04 02:34:10.697176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:24.103 [2024-11-04 02:34:10.697189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.945 ms 00:23:24.103 [2024-11-04 02:34:10.697200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.103 [2024-11-04 02:34:10.697238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.103 [2024-11-04 02:34:10.697251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:24.103 [2024-11-04 02:34:10.697260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:24.103 [2024-11-04 02:34:10.697273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.103 [2024-11-04 02:34:10.697926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.103 [2024-11-04 02:34:10.697955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:24.103 [2024-11-04 02:34:10.697966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms 00:23:24.103 [2024-11-04 02:34:10.697977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.103 [2024-11-04 02:34:10.698097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.103 [2024-11-04 02:34:10.698110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:24.103 [2024-11-04 02:34:10.698119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:23:24.103 [2024-11-04 02:34:10.698132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.103 [2024-11-04 02:34:10.716462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.103 [2024-11-04 02:34:10.716685] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:24.103 [2024-11-04 02:34:10.716707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.306 ms 00:23:24.103 [2024-11-04 02:34:10.716720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.103 [2024-11-04 02:34:10.730597] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:24.103 [2024-11-04 02:34:10.734657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.103 [2024-11-04 02:34:10.734884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:24.103 [2024-11-04 02:34:10.734912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.821 ms 00:23:24.103 [2024-11-04 02:34:10.734921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.103 [2024-11-04 02:34:10.842353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.103 [2024-11-04 02:34:10.842421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:24.103 [2024-11-04 02:34:10.842442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 107.389 ms 00:23:24.103 [2024-11-04 02:34:10.842452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.103 [2024-11-04 02:34:10.842672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.103 [2024-11-04 02:34:10.842686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:24.103 [2024-11-04 02:34:10.842702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:23:24.103 [2024-11-04 02:34:10.842713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.104 [2024-11-04 02:34:10.869378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.104 [2024-11-04 02:34:10.869586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:24.104 [2024-11-04 02:34:10.869617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.603 ms 00:23:24.104 [2024-11-04 02:34:10.869626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.104 [2024-11-04 02:34:10.895711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.104 [2024-11-04 02:34:10.895763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:24.104 [2024-11-04 02:34:10.895780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.008 ms 00:23:24.104 [2024-11-04 02:34:10.895790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.104 [2024-11-04 02:34:10.896454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.104 [2024-11-04 02:34:10.896479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:24.104 [2024-11-04 02:34:10.896491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms 00:23:24.104 [2024-11-04 02:34:10.896499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.104 [2024-11-04 02:34:10.980048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.104 [2024-11-04 02:34:10.980101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:24.104 [2024-11-04 02:34:10.980121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.478 ms 00:23:24.104 [2024-11-04 02:34:10.980130] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.104 [2024-11-04 02:34:11.008888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.104 [2024-11-04 02:34:11.008930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:24.104 [2024-11-04 02:34:11.008949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.632 ms 00:23:24.104 [2024-11-04 02:34:11.008957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.104 [2024-11-04 02:34:11.035935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.104 [2024-11-04 02:34:11.035982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:24.104 [2024-11-04 02:34:11.035998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.917 ms 00:23:24.104 [2024-11-04 02:34:11.036006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.104 [2024-11-04 02:34:11.062764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.104 [2024-11-04 02:34:11.062811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:24.104 [2024-11-04 02:34:11.062827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.696 ms 00:23:24.104 [2024-11-04 02:34:11.062836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.104 [2024-11-04 02:34:11.063057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.104 [2024-11-04 02:34:11.063096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:24.104 [2024-11-04 02:34:11.063186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:24.104 [2024-11-04 02:34:11.063197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.104 [2024-11-04 02:34:11.063318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.104 [2024-11-04 02:34:11.063328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:24.104 [2024-11-04 02:34:11.063341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:23:24.104 [2024-11-04 02:34:11.063348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.104 [2024-11-04 02:34:11.064558] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4190.464 ms, result 0 00:23:24.104 { 00:23:24.104 "name": "ftl0", 00:23:24.104 "uuid": "51631818-2bcf-469a-a6d7-de5c91fa5a94" 00:23:24.104 } 00:23:24.104 02:34:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:23:24.104 02:34:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:24.365 02:34:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:23:24.365 02:34:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:23:24.365 02:34:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:23:24.626 /dev/nbd0 00:23:24.627 02:34:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:23:24.627 02:34:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:23:24.627 02:34:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # local i 00:23:24.627 02:34:11 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:24.627 02:34:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:24.627 02:34:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:23:24.627 02:34:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # break 00:23:24.627 02:34:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:24.627 02:34:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:24.627 02:34:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:23:24.627 1+0 records in 00:23:24.627 1+0 records out 00:23:24.627 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000704367 s, 5.8 MB/s 00:23:24.627 02:34:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:24.627 02:34:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # size=4096 00:23:24.627 02:34:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:24.627 02:34:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:24.627 02:34:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # return 0 00:23:24.627 02:34:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:23:24.627 [2024-11-04 02:34:11.632947] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:23:24.627 [2024-11-04 02:34:11.633279] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77701 ] 00:23:24.888 [2024-11-04 02:34:11.799217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.889 [2024-11-04 02:34:11.922583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.278  [2024-11-04T02:34:14.326Z] Copying: 190/1024 [MB] (190 MBps) [2024-11-04T02:34:15.263Z] Copying: 413/1024 [MB] (223 MBps) [2024-11-04T02:34:16.199Z] Copying: 676/1024 [MB] (263 MBps) [2024-11-04T02:34:16.765Z] Copying: 930/1024 [MB] (253 MBps) [2024-11-04T02:34:17.333Z] Copying: 1024/1024 [MB] (average 234 MBps) 00:23:30.222 00:23:30.222 02:34:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:32.125 02:34:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:23:32.125 [2024-11-04 02:34:19.205447] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
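The waitfornbd loop traced above polls /proc/partitions until the kernel exposes the new nbd device, then reads a single 4 KiB block back with direct I/O to prove the device actually services requests. A minimal sketch of that pattern, assuming the same 20-attempt budget as the traced loop; the helper name and the sleep interval here are illustrative, not the autotest originals:

    # Wait until the kernel lists the device, then confirm it answers I/O.
    wait_for_nbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # illustrative back-off between polls
        done
        # One direct 4 KiB read proves the block device is usable.
        dd if="/dev/$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct
    }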
00:23:32.125 [2024-11-04 02:34:19.206384] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77784 ] 00:23:32.384 [2024-11-04 02:34:19.362705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.384 [2024-11-04 02:34:19.455242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.763  [2024-11-04T02:34:21.867Z] Copying: 27/1024 [MB] (27 MBps) [2024-11-04T02:34:22.805Z] Copying: 56/1024 [MB] (29 MBps) [2024-11-04T02:34:23.747Z] Copying: 75/1024 [MB] (19 MBps) [2024-11-04T02:34:24.687Z] Copying: 88/1024 [MB] (13 MBps) [2024-11-04T02:34:26.074Z] Copying: 104/1024 [MB] (15 MBps) [2024-11-04T02:34:27.018Z] Copying: 121/1024 [MB] (17 MBps) [2024-11-04T02:34:27.958Z] Copying: 137/1024 [MB] (15 MBps) [2024-11-04T02:34:28.900Z] Copying: 154/1024 [MB] (17 MBps) [2024-11-04T02:34:29.843Z] Copying: 173/1024 [MB] (19 MBps) [2024-11-04T02:34:30.784Z] Copying: 193/1024 [MB] (20 MBps) [2024-11-04T02:34:31.727Z] Copying: 211/1024 [MB] (17 MBps) [2024-11-04T02:34:32.671Z] Copying: 228/1024 [MB] (17 MBps) [2024-11-04T02:34:34.059Z] Copying: 246/1024 [MB] (17 MBps) [2024-11-04T02:34:34.997Z] Copying: 265/1024 [MB] (18 MBps) [2024-11-04T02:34:35.930Z] Copying: 282/1024 [MB] (16 MBps) [2024-11-04T02:34:36.873Z] Copying: 315/1024 [MB] (33 MBps) [2024-11-04T02:34:37.816Z] Copying: 335/1024 [MB] (20 MBps) [2024-11-04T02:34:38.759Z] Copying: 348/1024 [MB] (13 MBps) [2024-11-04T02:34:39.717Z] Copying: 363/1024 [MB] (14 MBps) [2024-11-04T02:34:41.108Z] Copying: 378/1024 [MB] (14 MBps) [2024-11-04T02:34:41.678Z] Copying: 397/1024 [MB] (19 MBps) [2024-11-04T02:34:43.062Z] Copying: 413/1024 [MB] (16 MBps) [2024-11-04T02:34:44.004Z] Copying: 432/1024 [MB] (18 MBps) [2024-11-04T02:34:44.947Z] Copying: 451/1024 [MB] (19 MBps) [2024-11-04T02:34:45.891Z] Copying: 469/1024 [MB] (17 MBps) [2024-11-04T02:34:46.828Z] Copying: 487/1024 [MB] (17 MBps) [2024-11-04T02:34:47.771Z] Copying: 514/1024 [MB] (27 MBps) [2024-11-04T02:34:48.714Z] Copying: 530/1024 [MB] (15 MBps) [2024-11-04T02:34:50.100Z] Copying: 545/1024 [MB] (14 MBps) [2024-11-04T02:34:50.677Z] Copying: 561/1024 [MB] (15 MBps) [2024-11-04T02:34:52.050Z] Copying: 574/1024 [MB] (13 MBps) [2024-11-04T02:34:52.982Z] Copying: 609/1024 [MB] (34 MBps) [2024-11-04T02:34:53.915Z] Copying: 644/1024 [MB] (34 MBps) [2024-11-04T02:34:54.849Z] Copying: 678/1024 [MB] (34 MBps) [2024-11-04T02:34:55.784Z] Copying: 712/1024 [MB] (34 MBps) [2024-11-04T02:34:56.757Z] Copying: 744/1024 [MB] (31 MBps) [2024-11-04T02:34:57.697Z] Copying: 757/1024 [MB] (13 MBps) [2024-11-04T02:34:59.073Z] Copying: 778/1024 [MB] (20 MBps) [2024-11-04T02:35:00.018Z] Copying: 814/1024 [MB] (36 MBps) [2024-11-04T02:35:00.962Z] Copying: 830/1024 [MB] (16 MBps) [2024-11-04T02:35:01.902Z] Copying: 845/1024 [MB] (15 MBps) [2024-11-04T02:35:02.846Z] Copying: 859/1024 [MB] (13 MBps) [2024-11-04T02:35:03.786Z] Copying: 873/1024 [MB] (14 MBps) [2024-11-04T02:35:04.720Z] Copying: 890/1024 [MB] (16 MBps) [2024-11-04T02:35:06.094Z] Copying: 912/1024 [MB] (22 MBps) [2024-11-04T02:35:07.027Z] Copying: 948/1024 [MB] (36 MBps) [2024-11-04T02:35:07.970Z] Copying: 983/1024 [MB] (34 MBps) [2024-11-04T02:35:08.912Z] Copying: 1007/1024 [MB] (24 MBps) [2024-11-04T02:35:08.912Z] Copying: 1023/1024 [MB] (15 MBps) [2024-11-04T02:35:09.483Z] Copying: 1024/1024 [MB] (average 20 MBps) 00:24:22.372 
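The copy that just completed is the write half of a plain write-then-verify check: fill a source file from /dev/urandom, take its md5 checksum, then stream the same data through the FTL bdev via /dev/nbd0 with direct I/O so it reaches the device before the crash is simulated. A minimal sketch of the sequence as traced at steps @75-@78, with the long repo paths shortened for readability:

    # 1 GiB of random reference data (262144 x 4 KiB blocks).
    spdk_dd -m 0x2 --if=/dev/urandom --of=testfile --bs=4096 --count=262144
    md5sum testfile    # reference checksum, compared again after the reload
    # Push the data through the FTL device; direct I/O bypasses the page cache.
    spdk_dd -m 0x2 --if=testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
    sync /dev/nbd0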
00:24:22.372 02:35:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:24:22.372 02:35:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:24:22.631 02:35:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:22.631 [2024-11-04 02:35:09.712802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.631 [2024-11-04 02:35:09.712842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:22.631 [2024-11-04 02:35:09.712853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:22.631 [2024-11-04 02:35:09.712861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.631 [2024-11-04 02:35:09.712892] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:22.631 [2024-11-04 02:35:09.714985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.631 [2024-11-04 02:35:09.715011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:22.631 [2024-11-04 02:35:09.715021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.079 ms 00:24:22.631 [2024-11-04 02:35:09.715027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.631 [2024-11-04 02:35:09.716851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.631 [2024-11-04 02:35:09.716884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:22.631 [2024-11-04 02:35:09.716893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.802 ms 00:24:22.631 [2024-11-04 02:35:09.716899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.631 [2024-11-04 02:35:09.730688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.631 [2024-11-04 02:35:09.730804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:22.631 [2024-11-04 02:35:09.730820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.773 ms 00:24:22.631 [2024-11-04 02:35:09.730829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.631 [2024-11-04 02:35:09.735601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.631 [2024-11-04 02:35:09.735623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:22.631 [2024-11-04 02:35:09.735632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.745 ms 00:24:22.631 [2024-11-04 02:35:09.735638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.889 [2024-11-04 02:35:09.753431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.889 [2024-11-04 02:35:09.753531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:22.889 [2024-11-04 02:35:09.753546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.732 ms 00:24:22.889 [2024-11-04 02:35:09.753552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.889 [2024-11-04 02:35:09.765250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.889 [2024-11-04 02:35:09.765276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:22.889 [2024-11-04 02:35:09.765286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 11.669 ms 00:24:22.890 [2024-11-04 02:35:09.765293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.890 [2024-11-04 02:35:09.765404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.890 [2024-11-04 02:35:09.765412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:22.890 [2024-11-04 02:35:09.765420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:24:22.890 [2024-11-04 02:35:09.765426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.890 [2024-11-04 02:35:09.782997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.890 [2024-11-04 02:35:09.783022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:22.890 [2024-11-04 02:35:09.783031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.555 ms 00:24:22.890 [2024-11-04 02:35:09.783036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.890 [2024-11-04 02:35:09.800411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.890 [2024-11-04 02:35:09.800507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:22.890 [2024-11-04 02:35:09.800521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.344 ms 00:24:22.890 [2024-11-04 02:35:09.800527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.890 [2024-11-04 02:35:09.817319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.890 [2024-11-04 02:35:09.817344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:22.890 [2024-11-04 02:35:09.817353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.763 ms 00:24:22.890 [2024-11-04 02:35:09.817358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.890 [2024-11-04 02:35:09.834397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.890 [2024-11-04 02:35:09.834421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:22.890 [2024-11-04 02:35:09.834430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.982 ms 00:24:22.890 [2024-11-04 02:35:09.834435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.890 [2024-11-04 02:35:09.834462] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:22.890 [2024-11-04 02:35:09.834473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834523] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 
[2024-11-04 02:35:09.834686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:24:22.890 [2024-11-04 02:35:09.834850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:22.890 [2024-11-04 02:35:09.834945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:22.891 [2024-11-04 02:35:09.834954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:22.891 [2024-11-04 02:35:09.834960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:22.891 [2024-11-04 02:35:09.834966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:22.891 [2024-11-04 02:35:09.834972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:22.891 [2024-11-04 02:35:09.834979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:22.891 [2024-11-04 02:35:09.834985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:22.891 [2024-11-04 02:35:09.834992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:22.891 [2024-11-04 02:35:09.834998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:22.891 [2024-11-04 02:35:09.835004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:22.891 [2024-11-04 02:35:09.835010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:22.891 [2024-11-04 02:35:09.835037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:24:22.891 [2024-11-04 02:35:09.835044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:22.891 [2024-11-04 02:35:09.835051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:22.891 [2024-11-04 02:35:09.835057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:22.891 [2024-11-04 02:35:09.835064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:22.891 [2024-11-04 02:35:09.835070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:22.891 [2024-11-04 02:35:09.835079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:22.891 [2024-11-04 02:35:09.835084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:22.891 [2024-11-04 02:35:09.835091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:22.891 [2024-11-04 02:35:09.835096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:22.891 [2024-11-04 02:35:09.835103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:22.891 [2024-11-04 02:35:09.835110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:22.891 [2024-11-04 02:35:09.835117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:22.891 [2024-11-04 02:35:09.835124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:22.891 [2024-11-04 02:35:09.835131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:22.891 [2024-11-04 02:35:09.835136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:22.891 [2024-11-04 02:35:09.835144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:22.891 [2024-11-04 02:35:09.835149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:22.891 [2024-11-04 02:35:09.835157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:22.891 [2024-11-04 02:35:09.835170] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:22.891 [2024-11-04 02:35:09.835177] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 51631818-2bcf-469a-a6d7-de5c91fa5a94 00:24:22.891 [2024-11-04 02:35:09.835184] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:22.891 [2024-11-04 02:35:09.835193] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:22.891 [2024-11-04 02:35:09.835198] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:22.891 [2024-11-04 02:35:09.835213] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:22.891 [2024-11-04 02:35:09.835218] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:22.891 [2024-11-04 02:35:09.835227] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:24:22.891 [2024-11-04 02:35:09.835232] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:22.891 [2024-11-04 02:35:09.835239] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:22.891 [2024-11-04 02:35:09.835244] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:22.891 [2024-11-04 02:35:09.835250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.891 [2024-11-04 02:35:09.835256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:22.891 [2024-11-04 02:35:09.835264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.789 ms 00:24:22.891 [2024-11-04 02:35:09.835270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.891 [2024-11-04 02:35:09.844755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.891 [2024-11-04 02:35:09.844781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:22.891 [2024-11-04 02:35:09.844790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.460 ms 00:24:22.891 [2024-11-04 02:35:09.844797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.891 [2024-11-04 02:35:09.845069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.891 [2024-11-04 02:35:09.845081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:22.891 [2024-11-04 02:35:09.845089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:24:22.891 [2024-11-04 02:35:09.845096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.891 [2024-11-04 02:35:09.878114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.891 [2024-11-04 02:35:09.878222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:22.891 [2024-11-04 02:35:09.878237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.891 [2024-11-04 02:35:09.878245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.891 [2024-11-04 02:35:09.878289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.891 [2024-11-04 02:35:09.878296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:22.891 [2024-11-04 02:35:09.878304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.891 [2024-11-04 02:35:09.878309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.891 [2024-11-04 02:35:09.878363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.891 [2024-11-04 02:35:09.878371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:22.891 [2024-11-04 02:35:09.878378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.891 [2024-11-04 02:35:09.878384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.891 [2024-11-04 02:35:09.878402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.891 [2024-11-04 02:35:09.878408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:22.891 [2024-11-04 02:35:09.878415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.891 [2024-11-04 02:35:09.878420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.891 [2024-11-04 02:35:09.936249] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.891 [2024-11-04 02:35:09.936283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:22.891 [2024-11-04 02:35:09.936293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.891 [2024-11-04 02:35:09.936301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.891 [2024-11-04 02:35:09.984397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.891 [2024-11-04 02:35:09.984429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:22.891 [2024-11-04 02:35:09.984439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.891 [2024-11-04 02:35:09.984445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.891 [2024-11-04 02:35:09.984501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.891 [2024-11-04 02:35:09.984508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:22.891 [2024-11-04 02:35:09.984516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.891 [2024-11-04 02:35:09.984522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.891 [2024-11-04 02:35:09.984571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.891 [2024-11-04 02:35:09.984579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:22.891 [2024-11-04 02:35:09.984586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.891 [2024-11-04 02:35:09.984592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.891 [2024-11-04 02:35:09.984660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.891 [2024-11-04 02:35:09.984667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:22.891 [2024-11-04 02:35:09.984675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.891 [2024-11-04 02:35:09.984680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.891 [2024-11-04 02:35:09.984706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.891 [2024-11-04 02:35:09.984714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:22.891 [2024-11-04 02:35:09.984722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.891 [2024-11-04 02:35:09.984728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.891 [2024-11-04 02:35:09.984757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.891 [2024-11-04 02:35:09.984764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:22.891 [2024-11-04 02:35:09.984771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.891 [2024-11-04 02:35:09.984777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.891 [2024-11-04 02:35:09.984815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.891 [2024-11-04 02:35:09.984822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:22.891 [2024-11-04 02:35:09.984830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.891 [2024-11-04 02:35:09.984835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:24:22.892 [2024-11-04 02:35:09.984967] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 272.126 ms, result 0 00:24:22.892 true 00:24:23.149 02:35:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 77559 00:24:23.149 02:35:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid77559 00:24:23.149 02:35:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:24:23.149 [2024-11-04 02:35:10.069992] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:24:23.149 [2024-11-04 02:35:10.070256] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78313 ] 00:24:23.149 [2024-11-04 02:35:10.225461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.408 [2024-11-04 02:35:10.301334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.782  [2024-11-04T02:35:12.827Z] Copying: 259/1024 [MB] (259 MBps) [2024-11-04T02:35:13.806Z] Copying: 520/1024 [MB] (260 MBps) [2024-11-04T02:35:14.742Z] Copying: 777/1024 [MB] (256 MBps) [2024-11-04T02:35:15.314Z] Copying: 1024/1024 [MB] (average 256 MBps) 00:24:28.203 00:24:28.203 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 77559 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:24:28.203 02:35:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:28.203 [2024-11-04 02:35:15.121757] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
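The kill -9 traced above is the point of the test: spdk_tgt dies while ftl0 is still dirty, with no bdev_ftl_unload and no clean superblock update. The second spdk_dd run that follows rebuilds the bdev stack on its own from the saved JSON config and appends a second 1 GiB data set behind the first; the reload below then detects the dirty state (SHM: clean 0) and runs recovery. A minimal sketch of that sequence, where the pid variable is illustrative (the script kills the literal pid it recorded at startup):

    # Crash the target: no unload, no graceful shutdown, ftl0 stays dirty.
    kill -9 "$spdk_tgt_pid"
    # Reload the stack from ftl.json inside spdk_dd itself and write the
    # second data set; --seek places it after the 262144 blocks written
    # before the crash.
    spdk_dd --if=testfile2 --ob=ftl0 --count=262144 --seek=262144 \
            --json=ftl.json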
00:24:28.203 [2024-11-04 02:35:15.121899] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78376 ] 00:24:28.203 [2024-11-04 02:35:15.281955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.465 [2024-11-04 02:35:15.400734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.727 [2024-11-04 02:35:15.690280] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:28.727 [2024-11-04 02:35:15.690365] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:28.727 [2024-11-04 02:35:15.756032] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:24:28.727 [2024-11-04 02:35:15.756602] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:24:28.727 [2024-11-04 02:35:15.757249] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:24:29.302 [2024-11-04 02:35:16.276257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.302 [2024-11-04 02:35:16.276314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:29.302 [2024-11-04 02:35:16.276329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:29.302 [2024-11-04 02:35:16.276338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.302 [2024-11-04 02:35:16.276397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.302 [2024-11-04 02:35:16.276409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:29.302 [2024-11-04 02:35:16.276417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:24:29.302 [2024-11-04 02:35:16.276424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.302 [2024-11-04 02:35:16.276444] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:29.302 [2024-11-04 02:35:16.277191] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:29.302 [2024-11-04 02:35:16.277211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.302 [2024-11-04 02:35:16.277219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:29.302 [2024-11-04 02:35:16.277228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.772 ms 00:24:29.302 [2024-11-04 02:35:16.277236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.302 [2024-11-04 02:35:16.278926] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:29.302 [2024-11-04 02:35:16.292982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.302 [2024-11-04 02:35:16.293034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:29.302 [2024-11-04 02:35:16.293048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.058 ms 00:24:29.302 [2024-11-04 02:35:16.293057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.302 [2024-11-04 02:35:16.293128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.302 [2024-11-04 02:35:16.293138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:24:29.302 [2024-11-04 02:35:16.293148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:24:29.302 [2024-11-04 02:35:16.293155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.302 [2024-11-04 02:35:16.301533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.302 [2024-11-04 02:35:16.301579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:29.302 [2024-11-04 02:35:16.301590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.303 ms 00:24:29.302 [2024-11-04 02:35:16.301598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.302 [2024-11-04 02:35:16.301678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.302 [2024-11-04 02:35:16.301687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:29.302 [2024-11-04 02:35:16.301695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:24:29.302 [2024-11-04 02:35:16.301702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.302 [2024-11-04 02:35:16.301749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.302 [2024-11-04 02:35:16.301759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:29.302 [2024-11-04 02:35:16.301767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:29.302 [2024-11-04 02:35:16.301775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.302 [2024-11-04 02:35:16.301798] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:29.302 [2024-11-04 02:35:16.305770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.302 [2024-11-04 02:35:16.305811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:29.302 [2024-11-04 02:35:16.305823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.977 ms 00:24:29.302 [2024-11-04 02:35:16.305831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.302 [2024-11-04 02:35:16.305882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.302 [2024-11-04 02:35:16.305893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:29.302 [2024-11-04 02:35:16.305902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:29.302 [2024-11-04 02:35:16.305910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.302 [2024-11-04 02:35:16.305967] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:29.302 [2024-11-04 02:35:16.305992] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:29.302 [2024-11-04 02:35:16.306029] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:29.302 [2024-11-04 02:35:16.306047] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:29.302 [2024-11-04 02:35:16.306153] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:29.302 [2024-11-04 02:35:16.306165] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:29.302 
[2024-11-04 02:35:16.306177] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:29.302 [2024-11-04 02:35:16.306187] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:29.302 [2024-11-04 02:35:16.306199] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:29.302 [2024-11-04 02:35:16.306208] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:29.302 [2024-11-04 02:35:16.306216] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:29.302 [2024-11-04 02:35:16.306223] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:29.302 [2024-11-04 02:35:16.306231] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:29.302 [2024-11-04 02:35:16.306239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.302 [2024-11-04 02:35:16.306247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:29.302 [2024-11-04 02:35:16.306254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:24:29.302 [2024-11-04 02:35:16.306261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.302 [2024-11-04 02:35:16.306348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.302 [2024-11-04 02:35:16.306359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:29.302 [2024-11-04 02:35:16.306367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:29.302 [2024-11-04 02:35:16.306374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.302 [2024-11-04 02:35:16.306478] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:29.302 [2024-11-04 02:35:16.306497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:29.302 [2024-11-04 02:35:16.306505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:29.302 [2024-11-04 02:35:16.306513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:29.302 [2024-11-04 02:35:16.306521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:29.302 [2024-11-04 02:35:16.306528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:29.302 [2024-11-04 02:35:16.306536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:29.302 [2024-11-04 02:35:16.306543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:29.302 [2024-11-04 02:35:16.306550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:29.302 [2024-11-04 02:35:16.306557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:29.302 [2024-11-04 02:35:16.306564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:29.302 [2024-11-04 02:35:16.306581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:29.302 [2024-11-04 02:35:16.306588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:29.302 [2024-11-04 02:35:16.306595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:29.302 [2024-11-04 02:35:16.306602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:29.302 [2024-11-04 02:35:16.306609] ftl_layout.c: 133:dump_region: 
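
The layout dump just above reports 20971520 L2P entries at an address size of 4 bytes, alongside an l2p region of 80.00 MiB; the region size is simply entries times address size, which can be checked directly:

    # Consistency check on the layout numbers dumped above.
    awk 'BEGIN {
        entries = 20971520      # "L2P entries" from the dump
        addr_sz = 4             # "L2P address size" in bytes
        printf "%.2f MiB\n", entries * addr_sz / (1024 * 1024)   # -> 80.00 MiB
    }'
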
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:29.302 [2024-11-04 02:35:16.306616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:29.302 [2024-11-04 02:35:16.306623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:29.303 [2024-11-04 02:35:16.306629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:29.303 [2024-11-04 02:35:16.306636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:29.303 [2024-11-04 02:35:16.306643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:29.303 [2024-11-04 02:35:16.306650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:29.303 [2024-11-04 02:35:16.306657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:29.303 [2024-11-04 02:35:16.306664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:29.303 [2024-11-04 02:35:16.306670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:29.303 [2024-11-04 02:35:16.306677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:29.303 [2024-11-04 02:35:16.306684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:29.303 [2024-11-04 02:35:16.306690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:29.303 [2024-11-04 02:35:16.306697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:29.303 [2024-11-04 02:35:16.306703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:29.303 [2024-11-04 02:35:16.306709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:29.303 [2024-11-04 02:35:16.306716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:29.303 [2024-11-04 02:35:16.306723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:29.303 [2024-11-04 02:35:16.306729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:29.303 [2024-11-04 02:35:16.306735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:29.303 [2024-11-04 02:35:16.306742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:29.303 [2024-11-04 02:35:16.306748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:29.303 [2024-11-04 02:35:16.306755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:29.303 [2024-11-04 02:35:16.306762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:29.303 [2024-11-04 02:35:16.306768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:29.303 [2024-11-04 02:35:16.306775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:29.303 [2024-11-04 02:35:16.306781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:29.303 [2024-11-04 02:35:16.306788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:29.303 [2024-11-04 02:35:16.306795] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:29.303 [2024-11-04 02:35:16.306803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:29.303 [2024-11-04 02:35:16.306811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:29.303 [2024-11-04 02:35:16.306821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:29.303 [2024-11-04 
02:35:16.306830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:29.303 [2024-11-04 02:35:16.306837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:29.303 [2024-11-04 02:35:16.306844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:29.303 [2024-11-04 02:35:16.306851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:29.303 [2024-11-04 02:35:16.306857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:29.303 [2024-11-04 02:35:16.306882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:29.303 [2024-11-04 02:35:16.306891] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:29.303 [2024-11-04 02:35:16.306901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:29.303 [2024-11-04 02:35:16.306910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:29.303 [2024-11-04 02:35:16.306918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:29.303 [2024-11-04 02:35:16.306925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:29.303 [2024-11-04 02:35:16.306932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:29.303 [2024-11-04 02:35:16.306940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:29.303 [2024-11-04 02:35:16.306947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:29.303 [2024-11-04 02:35:16.306954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:29.303 [2024-11-04 02:35:16.306961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:29.303 [2024-11-04 02:35:16.306969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:29.303 [2024-11-04 02:35:16.306976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:29.303 [2024-11-04 02:35:16.306983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:29.303 [2024-11-04 02:35:16.306989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:29.303 [2024-11-04 02:35:16.306996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:29.303 [2024-11-04 02:35:16.307003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:29.303 [2024-11-04 02:35:16.307010] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:24:29.303 [2024-11-04 02:35:16.307018] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:29.303 [2024-11-04 02:35:16.307027] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:29.303 [2024-11-04 02:35:16.307034] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:29.303 [2024-11-04 02:35:16.307041] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:29.303 [2024-11-04 02:35:16.307050] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:29.303 [2024-11-04 02:35:16.307059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.303 [2024-11-04 02:35:16.307067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:29.303 [2024-11-04 02:35:16.307075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.649 ms 00:24:29.303 [2024-11-04 02:35:16.307083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.303 [2024-11-04 02:35:16.338761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.303 [2024-11-04 02:35:16.338989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:29.303 [2024-11-04 02:35:16.339010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.634 ms 00:24:29.303 [2024-11-04 02:35:16.339020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.303 [2024-11-04 02:35:16.339114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.303 [2024-11-04 02:35:16.339131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:29.303 [2024-11-04 02:35:16.339140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:24:29.303 [2024-11-04 02:35:16.339148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.303 [2024-11-04 02:35:16.388650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.303 [2024-11-04 02:35:16.388703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:29.303 [2024-11-04 02:35:16.388717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.440 ms 00:24:29.303 [2024-11-04 02:35:16.388729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.303 [2024-11-04 02:35:16.388780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.303 [2024-11-04 02:35:16.388790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:29.303 [2024-11-04 02:35:16.388799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:29.303 [2024-11-04 02:35:16.388808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.303 [2024-11-04 02:35:16.389394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.303 [2024-11-04 02:35:16.389418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:29.303 [2024-11-04 02:35:16.389429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:24:29.303 [2024-11-04 02:35:16.389437] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.303 [2024-11-04 02:35:16.389596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.303 [2024-11-04 02:35:16.389619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:29.303 [2024-11-04 02:35:16.389628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:24:29.303 [2024-11-04 02:35:16.389636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.303 [2024-11-04 02:35:16.405425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.303 [2024-11-04 02:35:16.405579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:29.303 [2024-11-04 02:35:16.405640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.768 ms 00:24:29.303 [2024-11-04 02:35:16.405663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.566 [2024-11-04 02:35:16.419990] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:29.566 [2024-11-04 02:35:16.420164] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:29.566 [2024-11-04 02:35:16.420233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.566 [2024-11-04 02:35:16.420255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:29.566 [2024-11-04 02:35:16.420277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.432 ms 00:24:29.566 [2024-11-04 02:35:16.420300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.566 [2024-11-04 02:35:16.446179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.566 [2024-11-04 02:35:16.446336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:29.566 [2024-11-04 02:35:16.446412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.827 ms 00:24:29.566 [2024-11-04 02:35:16.446436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.566 [2024-11-04 02:35:16.459530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.566 [2024-11-04 02:35:16.459719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:29.566 [2024-11-04 02:35:16.459787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.696 ms 00:24:29.566 [2024-11-04 02:35:16.459810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.566 [2024-11-04 02:35:16.472306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.566 [2024-11-04 02:35:16.472483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:29.566 [2024-11-04 02:35:16.472549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.156 ms 00:24:29.566 [2024-11-04 02:35:16.472572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.566 [2024-11-04 02:35:16.473354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.566 [2024-11-04 02:35:16.473487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:29.566 [2024-11-04 02:35:16.473550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:24:29.566 [2024-11-04 02:35:16.473574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
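
Every managed step above logs the same Action/name/duration/status quartet from trace_step. When skimming a long run like this, a small helper can pair each step name with its duration; a rough sketch, assuming a saved console log named build.log with one notice per line (as in the raw Jenkins output — the filename is hypothetical):

    # Print "duration <TAB> step name" for each trace_step quartet.
    awk '/trace_step/ && /name:/     { sub(/.*name: /, "");     n = $0 }
         /trace_step/ && /duration:/ { sub(/.*duration: /, ""); print $0 "\t" n }' build.log
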
00:24:29.566 [2024-11-04 02:35:16.536827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.566 [2024-11-04 02:35:16.537088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:29.566 [2024-11-04 02:35:16.537115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.217 ms 00:24:29.566 [2024-11-04 02:35:16.537124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.566 [2024-11-04 02:35:16.548122] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:29.566 [2024-11-04 02:35:16.551125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.566 [2024-11-04 02:35:16.551169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:29.566 [2024-11-04 02:35:16.551181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.878 ms 00:24:29.566 [2024-11-04 02:35:16.551191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.566 [2024-11-04 02:35:16.551287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.566 [2024-11-04 02:35:16.551299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:29.566 [2024-11-04 02:35:16.551309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:29.566 [2024-11-04 02:35:16.551318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.566 [2024-11-04 02:35:16.551392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.566 [2024-11-04 02:35:16.551403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:29.566 [2024-11-04 02:35:16.551412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:29.566 [2024-11-04 02:35:16.551421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.566 [2024-11-04 02:35:16.551443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.566 [2024-11-04 02:35:16.551457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:29.566 [2024-11-04 02:35:16.551466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:29.566 [2024-11-04 02:35:16.551474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.566 [2024-11-04 02:35:16.551511] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:29.566 [2024-11-04 02:35:16.551522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.566 [2024-11-04 02:35:16.551531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:29.566 [2024-11-04 02:35:16.551539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:29.566 [2024-11-04 02:35:16.551551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.566 [2024-11-04 02:35:16.577428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.566 [2024-11-04 02:35:16.577586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:29.566 [2024-11-04 02:35:16.577649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.857 ms 00:24:29.566 [2024-11-04 02:35:16.577673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.566 [2024-11-04 02:35:16.577764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.566 [2024-11-04 
02:35:16.577791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:29.566 [2024-11-04 02:35:16.577812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:24:29.566 [2024-11-04 02:35:16.577831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.566 [2024-11-04 02:35:16.579158] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 302.379 ms, result 0 00:24:30.512  [2024-11-04T02:35:19.007Z] Copying: 20/1024 [MB] (20 MBps) [2024-11-04T02:35:19.951Z] Copying: 44/1024 [MB] (23 MBps) [2024-11-04T02:35:20.891Z] Copying: 58/1024 [MB] (14 MBps) [2024-11-04T02:35:21.824Z] Copying: 74/1024 [MB] (16 MBps) [2024-11-04T02:35:22.760Z] Copying: 113/1024 [MB] (38 MBps) [2024-11-04T02:35:23.707Z] Copying: 160/1024 [MB] (46 MBps) [2024-11-04T02:35:24.654Z] Copying: 181/1024 [MB] (21 MBps) [2024-11-04T02:35:25.624Z] Copying: 192/1024 [MB] (10 MBps) [2024-11-04T02:35:27.015Z] Copying: 207/1024 [MB] (15 MBps) [2024-11-04T02:35:27.961Z] Copying: 217/1024 [MB] (10 MBps) [2024-11-04T02:35:28.906Z] Copying: 240/1024 [MB] (23 MBps) [2024-11-04T02:35:29.841Z] Copying: 261/1024 [MB] (20 MBps) [2024-11-04T02:35:30.793Z] Copying: 284/1024 [MB] (23 MBps) [2024-11-04T02:35:31.750Z] Copying: 326/1024 [MB] (41 MBps) [2024-11-04T02:35:32.698Z] Copying: 362/1024 [MB] (36 MBps) [2024-11-04T02:35:33.640Z] Copying: 373/1024 [MB] (10 MBps) [2024-11-04T02:35:35.028Z] Copying: 386/1024 [MB] (13 MBps) [2024-11-04T02:35:35.601Z] Copying: 397/1024 [MB] (10 MBps) [2024-11-04T02:35:36.988Z] Copying: 407/1024 [MB] (10 MBps) [2024-11-04T02:35:37.934Z] Copying: 420/1024 [MB] (13 MBps) [2024-11-04T02:35:38.880Z] Copying: 430/1024 [MB] (10 MBps) [2024-11-04T02:35:39.824Z] Copying: 451208/1048576 [kB] (10120 kBps) [2024-11-04T02:35:40.767Z] Copying: 450/1024 [MB] (10 MBps) [2024-11-04T02:35:41.704Z] Copying: 461/1024 [MB] (10 MBps) [2024-11-04T02:35:42.648Z] Copying: 474/1024 [MB] (12 MBps) [2024-11-04T02:35:44.035Z] Copying: 484/1024 [MB] (10 MBps) [2024-11-04T02:35:44.608Z] Copying: 495/1024 [MB] (11 MBps) [2024-11-04T02:35:45.995Z] Copying: 505/1024 [MB] (10 MBps) [2024-11-04T02:35:46.944Z] Copying: 516/1024 [MB] (10 MBps) [2024-11-04T02:35:47.901Z] Copying: 538/1024 [MB] (21 MBps) [2024-11-04T02:35:48.851Z] Copying: 559/1024 [MB] (20 MBps) [2024-11-04T02:35:49.808Z] Copying: 572/1024 [MB] (13 MBps) [2024-11-04T02:35:50.760Z] Copying: 586/1024 [MB] (13 MBps) [2024-11-04T02:35:51.701Z] Copying: 597/1024 [MB] (11 MBps) [2024-11-04T02:35:52.643Z] Copying: 613/1024 [MB] (15 MBps) [2024-11-04T02:35:54.028Z] Copying: 628/1024 [MB] (15 MBps) [2024-11-04T02:35:54.594Z] Copying: 639/1024 [MB] (10 MBps) [2024-11-04T02:35:55.980Z] Copying: 678/1024 [MB] (39 MBps) [2024-11-04T02:35:56.921Z] Copying: 700/1024 [MB] (22 MBps) [2024-11-04T02:35:57.860Z] Copying: 719/1024 [MB] (18 MBps) [2024-11-04T02:35:58.802Z] Copying: 739/1024 [MB] (20 MBps) [2024-11-04T02:35:59.759Z] Copying: 761/1024 [MB] (21 MBps) [2024-11-04T02:36:00.704Z] Copying: 779/1024 [MB] (18 MBps) [2024-11-04T02:36:01.639Z] Copying: 801/1024 [MB] (21 MBps) [2024-11-04T02:36:03.026Z] Copying: 847/1024 [MB] (46 MBps) [2024-11-04T02:36:03.596Z] Copying: 867/1024 [MB] (20 MBps) [2024-11-04T02:36:04.983Z] Copying: 884/1024 [MB] (16 MBps) [2024-11-04T02:36:05.964Z] Copying: 906/1024 [MB] (21 MBps) [2024-11-04T02:36:06.908Z] Copying: 923/1024 [MB] (17 MBps) [2024-11-04T02:36:07.851Z] Copying: 937/1024 [MB] (14 MBps) [2024-11-04T02:36:08.792Z] 
Copying: 956/1024 [MB] (18 MBps) [2024-11-04T02:36:09.734Z] Copying: 974/1024 [MB] (17 MBps) [2024-11-04T02:36:10.677Z] Copying: 990/1024 [MB] (16 MBps) [2024-11-04T02:36:11.622Z] Copying: 1009/1024 [MB] (18 MBps) [2024-11-04T02:36:12.562Z] Copying: 1023/1024 [MB] (13 MBps) [2024-11-04T02:36:12.562Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-04 02:36:12.477707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.451 [2024-11-04 02:36:12.477790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:25.451 [2024-11-04 02:36:12.477807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:25.451 [2024-11-04 02:36:12.477818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.451 [2024-11-04 02:36:12.478023] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:25.451 [2024-11-04 02:36:12.484444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.451 [2024-11-04 02:36:12.484669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:25.451 [2024-11-04 02:36:12.484694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.394 ms 00:25:25.451 [2024-11-04 02:36:12.484703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.451 [2024-11-04 02:36:12.495276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.451 [2024-11-04 02:36:12.495382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:25.451 [2024-11-04 02:36:12.495401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.524 ms 00:25:25.451 [2024-11-04 02:36:12.495410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.451 [2024-11-04 02:36:12.520909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.451 [2024-11-04 02:36:12.520966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:25.451 [2024-11-04 02:36:12.520982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.478 ms 00:25:25.451 [2024-11-04 02:36:12.520993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.451 [2024-11-04 02:36:12.527171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.451 [2024-11-04 02:36:12.527226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:25.451 [2024-11-04 02:36:12.527238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.136 ms 00:25:25.451 [2024-11-04 02:36:12.527246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.451 [2024-11-04 02:36:12.554279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.451 [2024-11-04 02:36:12.554450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:25.451 [2024-11-04 02:36:12.554471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.984 ms 00:25:25.451 [2024-11-04 02:36:12.554480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.712 [2024-11-04 02:36:12.570543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.712 [2024-11-04 02:36:12.570596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:25.712 [2024-11-04 02:36:12.570611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.988 ms 00:25:25.712 [2024-11-04 
02:36:12.570619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.975 [2024-11-04 02:36:12.871405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.975 [2024-11-04 02:36:12.871479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:25.975 [2024-11-04 02:36:12.871503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 300.726 ms 00:25:25.975 [2024-11-04 02:36:12.871513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.975 [2024-11-04 02:36:12.898536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.975 [2024-11-04 02:36:12.898591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:25.975 [2024-11-04 02:36:12.898605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.006 ms 00:25:25.975 [2024-11-04 02:36:12.898612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.975 [2024-11-04 02:36:12.924769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.975 [2024-11-04 02:36:12.924820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:25.975 [2024-11-04 02:36:12.924834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.107 ms 00:25:25.975 [2024-11-04 02:36:12.924841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.975 [2024-11-04 02:36:12.950488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.975 [2024-11-04 02:36:12.950539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:25.975 [2024-11-04 02:36:12.950552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.432 ms 00:25:25.975 [2024-11-04 02:36:12.950560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.975 [2024-11-04 02:36:12.976193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.975 [2024-11-04 02:36:12.976417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:25.975 [2024-11-04 02:36:12.976440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.552 ms 00:25:25.975 [2024-11-04 02:36:12.976448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.975 [2024-11-04 02:36:12.976535] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:25.975 [2024-11-04 02:36:12.976553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 105728 / 261120 wr_cnt: 1 state: open 00:25:25.975 [2024-11-04 02:36:12.976565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976827] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:25.975 [2024-11-04 02:36:12.976941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.976949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.976957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.976965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.976973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.976981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.976991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 
02:36:12.977060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
00:25:25.976 [2024-11-04 02:36:12.977260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:25.976 [2024-11-04 02:36:12.977419] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:25.976 [2024-11-04 02:36:12.977428] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 51631818-2bcf-469a-a6d7-de5c91fa5a94 00:25:25.976 [2024-11-04 02:36:12.977437] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 105728 00:25:25.976 [2024-11-04 02:36:12.977448] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 106688 00:25:25.976 [2024-11-04 02:36:12.977463] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 105728 00:25:25.976 [2024-11-04 02:36:12.977472] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0091 00:25:25.976 [2024-11-04 02:36:12.977480] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:25.976 [2024-11-04 02:36:12.977488] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:25.976 [2024-11-04 
02:36:12.977496] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:25.976 [2024-11-04 02:36:12.977503] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:25.976 [2024-11-04 02:36:12.977510] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:25.976 [2024-11-04 02:36:12.977518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.976 [2024-11-04 02:36:12.977526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:25.976 [2024-11-04 02:36:12.977536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.984 ms 00:25:25.976 [2024-11-04 02:36:12.977544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.976 [2024-11-04 02:36:12.991768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.976 [2024-11-04 02:36:12.991967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:25.976 [2024-11-04 02:36:12.991986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.186 ms 00:25:25.976 [2024-11-04 02:36:12.991996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.976 [2024-11-04 02:36:12.992382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.976 [2024-11-04 02:36:12.992393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:25.976 [2024-11-04 02:36:12.992402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:25:25.976 [2024-11-04 02:36:12.992418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.976 [2024-11-04 02:36:13.029496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.976 [2024-11-04 02:36:13.029551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:25.976 [2024-11-04 02:36:13.029565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.976 [2024-11-04 02:36:13.029574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.976 [2024-11-04 02:36:13.029641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.976 [2024-11-04 02:36:13.029651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:25.976 [2024-11-04 02:36:13.029661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.976 [2024-11-04 02:36:13.029676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.976 [2024-11-04 02:36:13.029765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.976 [2024-11-04 02:36:13.029778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:25.976 [2024-11-04 02:36:13.029787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.976 [2024-11-04 02:36:13.029796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.976 [2024-11-04 02:36:13.029813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.976 [2024-11-04 02:36:13.029823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:25.976 [2024-11-04 02:36:13.029832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.976 [2024-11-04 02:36:13.029841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.237 [2024-11-04 02:36:13.114558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:25:26.237 [2024-11-04 02:36:13.114625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:26.237 [2024-11-04 02:36:13.114639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.237 [2024-11-04 02:36:13.114648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.237 [2024-11-04 02:36:13.183615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.237 [2024-11-04 02:36:13.183704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:26.237 [2024-11-04 02:36:13.183718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.237 [2024-11-04 02:36:13.183727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.237 [2024-11-04 02:36:13.183796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.237 [2024-11-04 02:36:13.183805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:26.237 [2024-11-04 02:36:13.183814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.237 [2024-11-04 02:36:13.183823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.237 [2024-11-04 02:36:13.183901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.237 [2024-11-04 02:36:13.183912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:26.237 [2024-11-04 02:36:13.183921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.237 [2024-11-04 02:36:13.183929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.237 [2024-11-04 02:36:13.184032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.237 [2024-11-04 02:36:13.184050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:26.238 [2024-11-04 02:36:13.184060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.238 [2024-11-04 02:36:13.184068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.238 [2024-11-04 02:36:13.184102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.238 [2024-11-04 02:36:13.184111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:26.238 [2024-11-04 02:36:13.184120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.238 [2024-11-04 02:36:13.184128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.238 [2024-11-04 02:36:13.184167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.238 [2024-11-04 02:36:13.184179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:26.238 [2024-11-04 02:36:13.184188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.238 [2024-11-04 02:36:13.184196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.238 [2024-11-04 02:36:13.184248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.238 [2024-11-04 02:36:13.184259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:26.238 [2024-11-04 02:36:13.184268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.238 [2024-11-04 02:36:13.184276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.238 
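
Two figures from the output above can be sanity-checked. The copy into ftl0 moved 262144 blocks of 4 KiB at an average of 18 MBps, which implies roughly the minute of wall time visible between the 02:35:17 and 02:36:12 timestamps; and the statistics dump reports WAF 1.0091, which is just total writes (106688) over user writes (105728):

    awk 'BEGIN {
        mb = 262144 * 4096 / (1024 * 1024)           # 262144 x 4 KiB = 1024 MB
        printf "copy: %d MB / 18 MBps = ~%.0f s\n", mb, mb / 18   # ~57 s
        printf "WAF:  106688 / 105728 = %.4f\n", 106688 / 105728  # 1.0091
    }'
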
[2024-11-04 02:36:13.184410] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 709.163 ms, result 0 00:25:27.623 00:25:27.623 00:25:27.623 02:36:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:30.167 02:36:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:30.167 [2024-11-04 02:36:16.788569] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:25:30.168 [2024-11-04 02:36:16.788654] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79004 ] 00:25:30.168 [2024-11-04 02:36:16.945252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.168 [2024-11-04 02:36:17.047736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.430 [2024-11-04 02:36:17.339166] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:30.430 [2024-11-04 02:36:17.339243] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:30.430 [2024-11-04 02:36:17.498764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.430 [2024-11-04 02:36:17.498811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:30.430 [2024-11-04 02:36:17.498827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:30.430 [2024-11-04 02:36:17.498835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.430 [2024-11-04 02:36:17.498900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.430 [2024-11-04 02:36:17.498911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:30.430 [2024-11-04 02:36:17.498922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:25:30.430 [2024-11-04 02:36:17.498929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.430 [2024-11-04 02:36:17.498949] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:30.430 [2024-11-04 02:36:17.499609] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:30.430 [2024-11-04 02:36:17.499627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.430 [2024-11-04 02:36:17.499635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:30.430 [2024-11-04 02:36:17.499652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.683 ms 00:25:30.430 [2024-11-04 02:36:17.499660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.430 [2024-11-04 02:36:17.500794] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:30.430 [2024-11-04 02:36:17.513688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.430 [2024-11-04 02:36:17.513722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:30.430 [2024-11-04 02:36:17.513733] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 12.893 ms 00:25:30.430 [2024-11-04 02:36:17.513741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.430 [2024-11-04 02:36:17.513797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.430 [2024-11-04 02:36:17.513809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:30.430 [2024-11-04 02:36:17.513817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:25:30.430 [2024-11-04 02:36:17.513824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.430 [2024-11-04 02:36:17.519082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.430 [2024-11-04 02:36:17.519112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:30.430 [2024-11-04 02:36:17.519121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.179 ms 00:25:30.430 [2024-11-04 02:36:17.519128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.430 [2024-11-04 02:36:17.519203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.430 [2024-11-04 02:36:17.519212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:30.430 [2024-11-04 02:36:17.519220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:25:30.430 [2024-11-04 02:36:17.519227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.430 [2024-11-04 02:36:17.519276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.430 [2024-11-04 02:36:17.519287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:30.430 [2024-11-04 02:36:17.519296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:30.430 [2024-11-04 02:36:17.519303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.430 [2024-11-04 02:36:17.519323] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:30.430 [2024-11-04 02:36:17.522713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.430 [2024-11-04 02:36:17.522740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:30.430 [2024-11-04 02:36:17.522749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.395 ms 00:25:30.430 [2024-11-04 02:36:17.522759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.430 [2024-11-04 02:36:17.522788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.430 [2024-11-04 02:36:17.522796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:30.430 [2024-11-04 02:36:17.522804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:30.430 [2024-11-04 02:36:17.522812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.430 [2024-11-04 02:36:17.522830] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:30.430 [2024-11-04 02:36:17.522848] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:30.430 [2024-11-04 02:36:17.522891] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:30.430 [2024-11-04 02:36:17.522914] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: 
[FTL][ftl0] layout blob load 0x190 bytes 00:25:30.430 [2024-11-04 02:36:17.523017] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:30.430 [2024-11-04 02:36:17.523028] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:30.430 [2024-11-04 02:36:17.523039] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:30.430 [2024-11-04 02:36:17.523049] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:30.430 [2024-11-04 02:36:17.523058] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:30.430 [2024-11-04 02:36:17.523066] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:30.430 [2024-11-04 02:36:17.523074] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:30.430 [2024-11-04 02:36:17.523082] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:30.430 [2024-11-04 02:36:17.523090] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:30.430 [2024-11-04 02:36:17.523099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.430 [2024-11-04 02:36:17.523107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:30.430 [2024-11-04 02:36:17.523115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:25:30.430 [2024-11-04 02:36:17.523123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.430 [2024-11-04 02:36:17.523205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.430 [2024-11-04 02:36:17.523213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:30.430 [2024-11-04 02:36:17.523221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:30.430 [2024-11-04 02:36:17.523228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.430 [2024-11-04 02:36:17.523341] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:30.430 [2024-11-04 02:36:17.523355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:30.430 [2024-11-04 02:36:17.523364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:30.430 [2024-11-04 02:36:17.523373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.430 [2024-11-04 02:36:17.523380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:30.430 [2024-11-04 02:36:17.523386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:30.430 [2024-11-04 02:36:17.523393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:30.430 [2024-11-04 02:36:17.523402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:30.430 [2024-11-04 02:36:17.523410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:30.430 [2024-11-04 02:36:17.523416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:30.430 [2024-11-04 02:36:17.523423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:30.430 [2024-11-04 02:36:17.523430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:30.431 [2024-11-04 02:36:17.523436] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:30.431 [2024-11-04 02:36:17.523443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:30.431 [2024-11-04 02:36:17.523451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:30.431 [2024-11-04 02:36:17.523464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.431 [2024-11-04 02:36:17.523470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:30.431 [2024-11-04 02:36:17.523477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:30.431 [2024-11-04 02:36:17.523483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.431 [2024-11-04 02:36:17.523490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:30.431 [2024-11-04 02:36:17.523496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:30.431 [2024-11-04 02:36:17.523503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:30.431 [2024-11-04 02:36:17.523509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:30.431 [2024-11-04 02:36:17.523516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:30.431 [2024-11-04 02:36:17.523522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:30.431 [2024-11-04 02:36:17.523528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:30.431 [2024-11-04 02:36:17.523534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:30.431 [2024-11-04 02:36:17.523541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:30.431 [2024-11-04 02:36:17.523547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:30.431 [2024-11-04 02:36:17.523554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:30.431 [2024-11-04 02:36:17.523560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:30.431 [2024-11-04 02:36:17.523566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:30.431 [2024-11-04 02:36:17.523574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:30.431 [2024-11-04 02:36:17.523580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:30.431 [2024-11-04 02:36:17.523586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:30.431 [2024-11-04 02:36:17.523593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:30.431 [2024-11-04 02:36:17.523599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:30.431 [2024-11-04 02:36:17.523605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:30.431 [2024-11-04 02:36:17.523612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:30.431 [2024-11-04 02:36:17.523619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.431 [2024-11-04 02:36:17.523626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:30.431 [2024-11-04 02:36:17.523632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:30.431 [2024-11-04 02:36:17.523639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.431 [2024-11-04 02:36:17.523655] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 
00:25:30.431 [2024-11-04 02:36:17.523663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:30.431 [2024-11-04 02:36:17.523670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:30.431 [2024-11-04 02:36:17.523679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.431 [2024-11-04 02:36:17.523686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:30.431 [2024-11-04 02:36:17.523693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:30.431 [2024-11-04 02:36:17.523700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:30.431 [2024-11-04 02:36:17.523707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:30.431 [2024-11-04 02:36:17.523713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:30.431 [2024-11-04 02:36:17.523720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:30.431 [2024-11-04 02:36:17.523729] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:30.431 [2024-11-04 02:36:17.523739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:30.431 [2024-11-04 02:36:17.523747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:30.431 [2024-11-04 02:36:17.523755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:30.431 [2024-11-04 02:36:17.523762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:30.431 [2024-11-04 02:36:17.523769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:30.431 [2024-11-04 02:36:17.523776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:30.431 [2024-11-04 02:36:17.523785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:30.431 [2024-11-04 02:36:17.523792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:30.431 [2024-11-04 02:36:17.523799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:30.431 [2024-11-04 02:36:17.523806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:30.431 [2024-11-04 02:36:17.523812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:30.431 [2024-11-04 02:36:17.523820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:30.431 [2024-11-04 02:36:17.523827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:30.431 [2024-11-04 02:36:17.523835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 
ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:30.431 [2024-11-04 02:36:17.523842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:30.431 [2024-11-04 02:36:17.523849] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:30.431 [2024-11-04 02:36:17.523857] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:30.431 [2024-11-04 02:36:17.524102] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:30.431 [2024-11-04 02:36:17.524135] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:30.431 [2024-11-04 02:36:17.524164] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:30.431 [2024-11-04 02:36:17.524194] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:30.431 [2024-11-04 02:36:17.524223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.431 [2024-11-04 02:36:17.524244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:30.431 [2024-11-04 02:36:17.524264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.950 ms 00:25:30.431 [2024-11-04 02:36:17.524407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.692 [2024-11-04 02:36:17.550878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.692 [2024-11-04 02:36:17.551001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:30.692 [2024-11-04 02:36:17.551052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.404 ms 00:25:30.692 [2024-11-04 02:36:17.551076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.692 [2024-11-04 02:36:17.551167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.692 [2024-11-04 02:36:17.551194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:30.692 [2024-11-04 02:36:17.551213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:25:30.692 [2024-11-04 02:36:17.551232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.692 [2024-11-04 02:36:17.593272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.692 [2024-11-04 02:36:17.593415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:30.692 [2024-11-04 02:36:17.593473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.978 ms 00:25:30.692 [2024-11-04 02:36:17.593497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.692 [2024-11-04 02:36:17.593550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.692 [2024-11-04 02:36:17.593575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:30.692 [2024-11-04 02:36:17.593596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:30.692 [2024-11-04 02:36:17.593621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.692 [2024-11-04 02:36:17.594058] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.692 [2024-11-04 02:36:17.594103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:30.692 [2024-11-04 02:36:17.594124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:25:30.692 [2024-11-04 02:36:17.594143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.692 [2024-11-04 02:36:17.594286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.692 [2024-11-04 02:36:17.594391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:30.692 [2024-11-04 02:36:17.594411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:25:30.692 [2024-11-04 02:36:17.594431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.692 [2024-11-04 02:36:17.608072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.692 [2024-11-04 02:36:17.608185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:30.692 [2024-11-04 02:36:17.608235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.556 ms 00:25:30.692 [2024-11-04 02:36:17.608262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.692 [2024-11-04 02:36:17.621385] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:30.692 [2024-11-04 02:36:17.621419] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:30.692 [2024-11-04 02:36:17.621430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.692 [2024-11-04 02:36:17.621438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:30.693 [2024-11-04 02:36:17.621448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.059 ms 00:25:30.693 [2024-11-04 02:36:17.621455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.693 [2024-11-04 02:36:17.645990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.693 [2024-11-04 02:36:17.646032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:30.693 [2024-11-04 02:36:17.646043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.495 ms 00:25:30.693 [2024-11-04 02:36:17.646051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.693 [2024-11-04 02:36:17.658415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.693 [2024-11-04 02:36:17.658470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:30.693 [2024-11-04 02:36:17.658480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.321 ms 00:25:30.693 [2024-11-04 02:36:17.658488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.693 [2024-11-04 02:36:17.670374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.693 [2024-11-04 02:36:17.670510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:30.693 [2024-11-04 02:36:17.670527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.848 ms 00:25:30.693 [2024-11-04 02:36:17.670534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.693 [2024-11-04 02:36:17.671154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:30.693 [2024-11-04 02:36:17.671177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:30.693 [2024-11-04 02:36:17.671187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:25:30.693 [2024-11-04 02:36:17.671194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.693 [2024-11-04 02:36:17.729119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.693 [2024-11-04 02:36:17.729300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:30.693 [2024-11-04 02:36:17.729320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.904 ms 00:25:30.693 [2024-11-04 02:36:17.729335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.693 [2024-11-04 02:36:17.739798] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:30.693 [2024-11-04 02:36:17.742219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.693 [2024-11-04 02:36:17.742252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:30.693 [2024-11-04 02:36:17.742264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.848 ms 00:25:30.693 [2024-11-04 02:36:17.742272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.693 [2024-11-04 02:36:17.742364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.693 [2024-11-04 02:36:17.742374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:30.693 [2024-11-04 02:36:17.742383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:30.693 [2024-11-04 02:36:17.742392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.693 [2024-11-04 02:36:17.743887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.693 [2024-11-04 02:36:17.743922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:30.693 [2024-11-04 02:36:17.743933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.454 ms 00:25:30.693 [2024-11-04 02:36:17.743941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.693 [2024-11-04 02:36:17.743966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.693 [2024-11-04 02:36:17.743975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:30.693 [2024-11-04 02:36:17.743984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:30.693 [2024-11-04 02:36:17.743991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.693 [2024-11-04 02:36:17.744027] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:30.693 [2024-11-04 02:36:17.744040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.693 [2024-11-04 02:36:17.744049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:30.693 [2024-11-04 02:36:17.744058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:30.693 [2024-11-04 02:36:17.744066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.693 [2024-11-04 02:36:17.768597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.693 [2024-11-04 02:36:17.768639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 
00:25:30.693 [2024-11-04 02:36:17.768651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.513 ms 00:25:30.693 [2024-11-04 02:36:17.768659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.693 [2024-11-04 02:36:17.768744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.693 [2024-11-04 02:36:17.768754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:30.693 [2024-11-04 02:36:17.768763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:25:30.693 [2024-11-04 02:36:17.768771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.693 [2024-11-04 02:36:17.769840] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 270.619 ms, result 0 00:25:32.080  [2024-11-04T02:36:20.133Z] Copying: 1012/1048576 [kB] (1012 kBps) ... [2024-11-04T02:37:00.465Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-04 02:37:00.163800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.354 [2024-11-04 02:37:00.163918] mngt/ftl_mngt.c:
428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:13.354 [2024-11-04 02:37:00.163946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:13.354 [2024-11-04 02:37:00.163958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.354 [2024-11-04 02:37:00.163989] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:13.354 [2024-11-04 02:37:00.168539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.354 [2024-11-04 02:37:00.168591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:13.354 [2024-11-04 02:37:00.168605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.529 ms 00:26:13.354 [2024-11-04 02:37:00.168615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.354 [2024-11-04 02:37:00.168901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.354 [2024-11-04 02:37:00.168916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:13.354 [2024-11-04 02:37:00.168927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:26:13.354 [2024-11-04 02:37:00.168940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.354 [2024-11-04 02:37:00.183111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.354 [2024-11-04 02:37:00.183308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:13.354 [2024-11-04 02:37:00.183334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.150 ms 00:26:13.354 [2024-11-04 02:37:00.183343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.354 [2024-11-04 02:37:00.189628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.354 [2024-11-04 02:37:00.189672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:13.354 [2024-11-04 02:37:00.189685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.246 ms 00:26:13.354 [2024-11-04 02:37:00.189702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.354 [2024-11-04 02:37:00.216243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.354 [2024-11-04 02:37:00.216289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:13.354 [2024-11-04 02:37:00.216302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.476 ms 00:26:13.354 [2024-11-04 02:37:00.216310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.354 [2024-11-04 02:37:00.232174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.354 [2024-11-04 02:37:00.232222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:13.354 [2024-11-04 02:37:00.232236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.817 ms 00:26:13.354 [2024-11-04 02:37:00.232245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.354 [2024-11-04 02:37:00.236792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.354 [2024-11-04 02:37:00.236841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:13.354 [2024-11-04 02:37:00.236853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.494 ms 00:26:13.354 [2024-11-04 02:37:00.236862] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.354 [2024-11-04 02:37:00.262471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.354 [2024-11-04 02:37:00.262515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:13.354 [2024-11-04 02:37:00.262527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.571 ms 00:26:13.354 [2024-11-04 02:37:00.262535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.354 [2024-11-04 02:37:00.287701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.354 [2024-11-04 02:37:00.287745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:13.354 [2024-11-04 02:37:00.287771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.121 ms 00:26:13.354 [2024-11-04 02:37:00.287779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.354 [2024-11-04 02:37:00.312820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.354 [2024-11-04 02:37:00.312885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:13.354 [2024-11-04 02:37:00.312898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.995 ms 00:26:13.354 [2024-11-04 02:37:00.312906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.354 [2024-11-04 02:37:00.337776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.354 [2024-11-04 02:37:00.337821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:13.354 [2024-11-04 02:37:00.337833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.777 ms 00:26:13.354 [2024-11-04 02:37:00.337841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.354 [2024-11-04 02:37:00.337913] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:13.354 [2024-11-04 02:37:00.337930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:13.354 [2024-11-04 02:37:00.337942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:26:13.354 [2024-11-04 02:37:00.337951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.337959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.337968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.337977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.337986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.337994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.338004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.338012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.338021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 
00:26:13.354 [2024-11-04 02:37:00.338030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.338039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.338047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.338055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.338062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.338070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.338078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.338086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.338094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.338110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.338118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.338128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.338136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.338144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.338153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.338162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.338171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.338181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.338190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.338198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.338206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.338221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.338229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.338238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.338246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 
wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.338254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.338262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.338270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.338278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.338290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.338300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:13.354 [2024-11-04 02:37:00.338309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338654] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:13.355 [2024-11-04 02:37:00.338782] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:13.355 [2024-11-04 02:37:00.338794] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 51631818-2bcf-469a-a6d7-de5c91fa5a94 00:26:13.355 [2024-11-04 02:37:00.338802] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:13.355 [2024-11-04 02:37:00.338809] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 158912 00:26:13.355 [2024-11-04 02:37:00.338817] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 156928 00:26:13.355 [2024-11-04 02:37:00.338826] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0126 00:26:13.355 [2024-11-04 02:37:00.338840] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:13.355 [2024-11-04 02:37:00.338850] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:13.355 [2024-11-04 02:37:00.338859] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:13.355 [2024-11-04 02:37:00.338884] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:13.355 [2024-11-04 02:37:00.338891] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:13.355 [2024-11-04 02:37:00.338899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.355 [2024-11-04 02:37:00.338908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Dump statistics 00:26:13.355 [2024-11-04 02:37:00.338917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.988 ms 00:26:13.355 [2024-11-04 02:37:00.338926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.355 [2024-11-04 02:37:00.352476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.355 [2024-11-04 02:37:00.352653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:13.355 [2024-11-04 02:37:00.352679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.531 ms 00:26:13.355 [2024-11-04 02:37:00.352688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.355 [2024-11-04 02:37:00.353123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.355 [2024-11-04 02:37:00.353138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:13.355 [2024-11-04 02:37:00.353148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:26:13.355 [2024-11-04 02:37:00.353156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.355 [2024-11-04 02:37:00.389539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.355 [2024-11-04 02:37:00.389587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:13.355 [2024-11-04 02:37:00.389599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.355 [2024-11-04 02:37:00.389609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.355 [2024-11-04 02:37:00.389672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.355 [2024-11-04 02:37:00.389682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:13.355 [2024-11-04 02:37:00.389692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.355 [2024-11-04 02:37:00.389701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.355 [2024-11-04 02:37:00.389785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.355 [2024-11-04 02:37:00.389803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:13.355 [2024-11-04 02:37:00.389814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.355 [2024-11-04 02:37:00.389823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.355 [2024-11-04 02:37:00.389840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.355 [2024-11-04 02:37:00.389850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:13.355 [2024-11-04 02:37:00.389859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.355 [2024-11-04 02:37:00.389898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.616 [2024-11-04 02:37:00.475572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.616 [2024-11-04 02:37:00.475816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:13.616 [2024-11-04 02:37:00.475839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.616 [2024-11-04 02:37:00.475848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.616 [2024-11-04 02:37:00.546119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.616 [2024-11-04 
02:37:00.546359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:13.616 [2024-11-04 02:37:00.546381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.616 [2024-11-04 02:37:00.546392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.616 [2024-11-04 02:37:00.546458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.616 [2024-11-04 02:37:00.546470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:13.616 [2024-11-04 02:37:00.546480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.616 [2024-11-04 02:37:00.546496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.616 [2024-11-04 02:37:00.546555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.616 [2024-11-04 02:37:00.546566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:13.616 [2024-11-04 02:37:00.546575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.616 [2024-11-04 02:37:00.546585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.616 [2024-11-04 02:37:00.546699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.616 [2024-11-04 02:37:00.546711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:13.616 [2024-11-04 02:37:00.546720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.616 [2024-11-04 02:37:00.546733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.616 [2024-11-04 02:37:00.546767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.616 [2024-11-04 02:37:00.546778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:13.616 [2024-11-04 02:37:00.546787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.616 [2024-11-04 02:37:00.546796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.616 [2024-11-04 02:37:00.546838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.616 [2024-11-04 02:37:00.546852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:13.616 [2024-11-04 02:37:00.546861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.616 [2024-11-04 02:37:00.546901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.616 [2024-11-04 02:37:00.546956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.616 [2024-11-04 02:37:00.546969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:13.616 [2024-11-04 02:37:00.546979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.616 [2024-11-04 02:37:00.546986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.616 [2024-11-04 02:37:00.547125] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 383.295 ms, result 0 00:26:14.188 00:26:14.188 00:26:14.447 02:37:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:16.357 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:16.357 02:37:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:16.617 [2024-11-04 02:37:03.518966] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:26:16.617 [2024-11-04 02:37:03.519052] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79483 ] 00:26:16.617 [2024-11-04 02:37:03.674075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.877 [2024-11-04 02:37:03.778630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.139 [2024-11-04 02:37:04.070110] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:17.139 [2024-11-04 02:37:04.070190] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:17.139 [2024-11-04 02:37:04.230546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.139 [2024-11-04 02:37:04.230609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:17.139 [2024-11-04 02:37:04.230628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:17.139 [2024-11-04 02:37:04.230637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.139 [2024-11-04 02:37:04.230689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.139 [2024-11-04 02:37:04.230700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:17.139 [2024-11-04 02:37:04.230712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:26:17.139 [2024-11-04 02:37:04.230720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.139 [2024-11-04 02:37:04.230740] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:17.139 [2024-11-04 02:37:04.231614] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:17.139 [2024-11-04 02:37:04.231685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.139 [2024-11-04 02:37:04.231695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:17.139 [2024-11-04 02:37:04.231706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.950 ms 00:26:17.139 [2024-11-04 02:37:04.231714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.139 [2024-11-04 02:37:04.233499] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:17.139 [2024-11-04 02:37:04.247817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.139 [2024-11-04 02:37:04.248069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:17.139 [2024-11-04 02:37:04.248093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.320 ms 00:26:17.139 [2024-11-04 02:37:04.248102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.139 [2024-11-04 02:37:04.248267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.139 [2024-11-04 02:37:04.248300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:17.139 
[2024-11-04 02:37:04.248311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:26:17.139 [2024-11-04 02:37:04.248320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.400 [2024-11-04 02:37:04.256470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.400 [2024-11-04 02:37:04.256514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:17.400 [2024-11-04 02:37:04.256526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.066 ms 00:26:17.400 [2024-11-04 02:37:04.256535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.400 [2024-11-04 02:37:04.256623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.400 [2024-11-04 02:37:04.256632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:17.401 [2024-11-04 02:37:04.256641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:26:17.401 [2024-11-04 02:37:04.256650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.401 [2024-11-04 02:37:04.256694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.401 [2024-11-04 02:37:04.256706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:17.401 [2024-11-04 02:37:04.256715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:17.401 [2024-11-04 02:37:04.256723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.401 [2024-11-04 02:37:04.256746] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:17.401 [2024-11-04 02:37:04.260962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.401 [2024-11-04 02:37:04.261001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:17.401 [2024-11-04 02:37:04.261012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.221 ms 00:26:17.401 [2024-11-04 02:37:04.261024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.401 [2024-11-04 02:37:04.261059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.401 [2024-11-04 02:37:04.261067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:17.401 [2024-11-04 02:37:04.261076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:17.401 [2024-11-04 02:37:04.261084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.401 [2024-11-04 02:37:04.261139] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:17.401 [2024-11-04 02:37:04.261160] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:17.401 [2024-11-04 02:37:04.261198] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:17.401 [2024-11-04 02:37:04.261219] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:17.401 [2024-11-04 02:37:04.261327] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:17.401 [2024-11-04 02:37:04.261341] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:17.401 [2024-11-04 02:37:04.261352] 
upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:17.401 [2024-11-04 02:37:04.261363] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:17.401 [2024-11-04 02:37:04.261372] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:17.401 [2024-11-04 02:37:04.261380] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:17.401 [2024-11-04 02:37:04.261388] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:17.401 [2024-11-04 02:37:04.261396] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:17.401 [2024-11-04 02:37:04.261406] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:17.401 [2024-11-04 02:37:04.261418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.401 [2024-11-04 02:37:04.261427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:17.401 [2024-11-04 02:37:04.261440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:26:17.401 [2024-11-04 02:37:04.261448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.401 [2024-11-04 02:37:04.261532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.401 [2024-11-04 02:37:04.261542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:17.401 [2024-11-04 02:37:04.261551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:17.401 [2024-11-04 02:37:04.261560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.401 [2024-11-04 02:37:04.261663] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:17.401 [2024-11-04 02:37:04.261678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:17.401 [2024-11-04 02:37:04.261687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:17.401 [2024-11-04 02:37:04.261695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:17.401 [2024-11-04 02:37:04.261705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:17.401 [2024-11-04 02:37:04.261713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:17.401 [2024-11-04 02:37:04.261721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:17.401 [2024-11-04 02:37:04.261730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:17.401 [2024-11-04 02:37:04.261738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:17.401 [2024-11-04 02:37:04.261745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:17.401 [2024-11-04 02:37:04.261752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:17.401 [2024-11-04 02:37:04.261760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:17.401 [2024-11-04 02:37:04.261769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:17.401 [2024-11-04 02:37:04.261776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:17.401 [2024-11-04 02:37:04.261786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:17.401 [2024-11-04 02:37:04.261799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 
MiB 00:26:17.401 [2024-11-04 02:37:04.261807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:17.401 [2024-11-04 02:37:04.261814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:17.401 [2024-11-04 02:37:04.261820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:17.401 [2024-11-04 02:37:04.261828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:17.401 [2024-11-04 02:37:04.261837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:17.401 [2024-11-04 02:37:04.261843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:17.401 [2024-11-04 02:37:04.261849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:17.401 [2024-11-04 02:37:04.261856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:17.401 [2024-11-04 02:37:04.261900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:17.401 [2024-11-04 02:37:04.261908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:17.401 [2024-11-04 02:37:04.261916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:17.401 [2024-11-04 02:37:04.261925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:17.401 [2024-11-04 02:37:04.261932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:17.401 [2024-11-04 02:37:04.261939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:17.401 [2024-11-04 02:37:04.261946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:17.401 [2024-11-04 02:37:04.261953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:17.401 [2024-11-04 02:37:04.261960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:17.401 [2024-11-04 02:37:04.261968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:17.401 [2024-11-04 02:37:04.261976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:17.401 [2024-11-04 02:37:04.261982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:17.401 [2024-11-04 02:37:04.261989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:17.401 [2024-11-04 02:37:04.261996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:17.401 [2024-11-04 02:37:04.262003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:17.401 [2024-11-04 02:37:04.262011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:17.401 [2024-11-04 02:37:04.262017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:17.401 [2024-11-04 02:37:04.262024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:17.401 [2024-11-04 02:37:04.262033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:17.401 [2024-11-04 02:37:04.262041] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:17.401 [2024-11-04 02:37:04.262050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:17.401 [2024-11-04 02:37:04.262058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:17.401 [2024-11-04 02:37:04.262067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:17.401 [2024-11-04 02:37:04.262076] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:17.401 [2024-11-04 02:37:04.262084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:17.401 [2024-11-04 02:37:04.262091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:17.401 [2024-11-04 02:37:04.262098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:17.401 [2024-11-04 02:37:04.262104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:17.401 [2024-11-04 02:37:04.262111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:17.401 [2024-11-04 02:37:04.262121] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:17.401 [2024-11-04 02:37:04.262132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:17.401 [2024-11-04 02:37:04.262140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:17.401 [2024-11-04 02:37:04.262148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:17.401 [2024-11-04 02:37:04.262155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:17.401 [2024-11-04 02:37:04.262162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:17.401 [2024-11-04 02:37:04.262171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:17.401 [2024-11-04 02:37:04.262179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:17.401 [2024-11-04 02:37:04.262186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:17.401 [2024-11-04 02:37:04.262193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:17.401 [2024-11-04 02:37:04.262210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:17.401 [2024-11-04 02:37:04.262217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:17.402 [2024-11-04 02:37:04.262225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:17.402 [2024-11-04 02:37:04.262232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:17.402 [2024-11-04 02:37:04.262240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:17.402 [2024-11-04 02:37:04.262247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:17.402 [2024-11-04 02:37:04.262254] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:17.402 
[2024-11-04 02:37:04.262263] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:17.402 [2024-11-04 02:37:04.262277] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:17.402 [2024-11-04 02:37:04.262285] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:17.402 [2024-11-04 02:37:04.262292] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:17.402 [2024-11-04 02:37:04.262300] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:17.402 [2024-11-04 02:37:04.262307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.402 [2024-11-04 02:37:04.262315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:17.402 [2024-11-04 02:37:04.262323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.713 ms 00:26:17.402 [2024-11-04 02:37:04.262339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.402 [2024-11-04 02:37:04.294551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.402 [2024-11-04 02:37:04.294602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:17.402 [2024-11-04 02:37:04.294616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.165 ms 00:26:17.402 [2024-11-04 02:37:04.294626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.402 [2024-11-04 02:37:04.294717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.402 [2024-11-04 02:37:04.294731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:17.402 [2024-11-04 02:37:04.294740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:26:17.402 [2024-11-04 02:37:04.294748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.402 [2024-11-04 02:37:04.346226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.402 [2024-11-04 02:37:04.346363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:17.402 [2024-11-04 02:37:04.346383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.419 ms 00:26:17.402 [2024-11-04 02:37:04.346391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.402 [2024-11-04 02:37:04.346432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.402 [2024-11-04 02:37:04.346441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:17.402 [2024-11-04 02:37:04.346450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:17.402 [2024-11-04 02:37:04.346462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.402 [2024-11-04 02:37:04.346837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.402 [2024-11-04 02:37:04.346853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:17.402 [2024-11-04 02:37:04.346887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:26:17.402 [2024-11-04 02:37:04.346898] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:26:17.402 [2024-11-04 02:37:04.347021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.402 [2024-11-04 02:37:04.347031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:17.402 [2024-11-04 02:37:04.347040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:26:17.402 [2024-11-04 02:37:04.347047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.402 [2024-11-04 02:37:04.360194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.402 [2024-11-04 02:37:04.360223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:17.402 [2024-11-04 02:37:04.360233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.123 ms 00:26:17.402 [2024-11-04 02:37:04.360243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.402 [2024-11-04 02:37:04.373121] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:17.402 [2024-11-04 02:37:04.373256] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:17.402 [2024-11-04 02:37:04.373271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.402 [2024-11-04 02:37:04.373279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:17.402 [2024-11-04 02:37:04.373289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.936 ms 00:26:17.402 [2024-11-04 02:37:04.373296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.402 [2024-11-04 02:37:04.400574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.402 [2024-11-04 02:37:04.400717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:17.402 [2024-11-04 02:37:04.400735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.244 ms 00:26:17.402 [2024-11-04 02:37:04.400743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.402 [2024-11-04 02:37:04.412568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.402 [2024-11-04 02:37:04.412603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:17.402 [2024-11-04 02:37:04.412614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.788 ms 00:26:17.402 [2024-11-04 02:37:04.412621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.402 [2024-11-04 02:37:04.423954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.402 [2024-11-04 02:37:04.423989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:17.402 [2024-11-04 02:37:04.423999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.295 ms 00:26:17.402 [2024-11-04 02:37:04.424006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.402 [2024-11-04 02:37:04.424613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.402 [2024-11-04 02:37:04.424637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:17.402 [2024-11-04 02:37:04.424647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:26:17.402 [2024-11-04 02:37:04.424655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.402 [2024-11-04 
02:37:04.484607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.402 [2024-11-04 02:37:04.484666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:17.402 [2024-11-04 02:37:04.484681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.929 ms 00:26:17.402 [2024-11-04 02:37:04.484698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.402 [2024-11-04 02:37:04.496036] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:17.402 [2024-11-04 02:37:04.499005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.402 [2024-11-04 02:37:04.499191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:17.402 [2024-11-04 02:37:04.499210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.252 ms 00:26:17.402 [2024-11-04 02:37:04.499219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.402 [2024-11-04 02:37:04.499332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.402 [2024-11-04 02:37:04.499345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:17.402 [2024-11-04 02:37:04.499357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:17.402 [2024-11-04 02:37:04.499366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.402 [2024-11-04 02:37:04.500271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.402 [2024-11-04 02:37:04.500313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:17.402 [2024-11-04 02:37:04.500326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.861 ms 00:26:17.402 [2024-11-04 02:37:04.500337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.402 [2024-11-04 02:37:04.500370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.402 [2024-11-04 02:37:04.500380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:17.402 [2024-11-04 02:37:04.500390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:17.402 [2024-11-04 02:37:04.500400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.402 [2024-11-04 02:37:04.500444] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:17.402 [2024-11-04 02:37:04.500458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.402 [2024-11-04 02:37:04.500469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:17.402 [2024-11-04 02:37:04.500479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:26:17.402 [2024-11-04 02:37:04.500490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.664 [2024-11-04 02:37:04.526720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.664 [2024-11-04 02:37:04.526928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:17.664 [2024-11-04 02:37:04.526951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.209 ms 00:26:17.665 [2024-11-04 02:37:04.526961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.665 [2024-11-04 02:37:04.527463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.665 [2024-11-04 02:37:04.527545] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:17.665 [2024-11-04 02:37:04.527579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:26:17.665 [2024-11-04 02:37:04.527605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.665 [2024-11-04 02:37:04.530176] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 298.409 ms, result 0 00:26:18.607  [2024-11-04T02:37:07.102Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-04T02:37:08.044Z] Copying: 44/1024 [MB] (21 MBps) [2024-11-04T02:37:08.986Z] Copying: 64/1024 [MB] (19 MBps) [2024-11-04T02:37:09.927Z] Copying: 81/1024 [MB] (17 MBps) [2024-11-04T02:37:10.868Z] Copying: 91/1024 [MB] (10 MBps) [2024-11-04T02:37:11.809Z] Copying: 109/1024 [MB] (17 MBps) [2024-11-04T02:37:12.754Z] Copying: 130/1024 [MB] (21 MBps) [2024-11-04T02:37:13.740Z] Copying: 145/1024 [MB] (15 MBps) [2024-11-04T02:37:15.143Z] Copying: 165/1024 [MB] (20 MBps) [2024-11-04T02:37:15.716Z] Copying: 180/1024 [MB] (14 MBps) [2024-11-04T02:37:17.106Z] Copying: 199/1024 [MB] (18 MBps) [2024-11-04T02:37:18.051Z] Copying: 213/1024 [MB] (14 MBps) [2024-11-04T02:37:18.994Z] Copying: 227/1024 [MB] (13 MBps) [2024-11-04T02:37:19.937Z] Copying: 238/1024 [MB] (10 MBps) [2024-11-04T02:37:20.879Z] Copying: 249/1024 [MB] (10 MBps) [2024-11-04T02:37:21.846Z] Copying: 271/1024 [MB] (22 MBps) [2024-11-04T02:37:22.791Z] Copying: 284/1024 [MB] (12 MBps) [2024-11-04T02:37:23.736Z] Copying: 296/1024 [MB] (12 MBps) [2024-11-04T02:37:25.122Z] Copying: 307/1024 [MB] (10 MBps) [2024-11-04T02:37:26.076Z] Copying: 319/1024 [MB] (12 MBps) [2024-11-04T02:37:27.020Z] Copying: 342/1024 [MB] (23 MBps) [2024-11-04T02:37:27.965Z] Copying: 354/1024 [MB] (12 MBps) [2024-11-04T02:37:28.910Z] Copying: 368/1024 [MB] (13 MBps) [2024-11-04T02:37:29.852Z] Copying: 387/1024 [MB] (19 MBps) [2024-11-04T02:37:30.801Z] Copying: 404/1024 [MB] (16 MBps) [2024-11-04T02:37:31.772Z] Copying: 425/1024 [MB] (21 MBps) [2024-11-04T02:37:32.716Z] Copying: 436/1024 [MB] (11 MBps) [2024-11-04T02:37:34.105Z] Copying: 447/1024 [MB] (10 MBps) [2024-11-04T02:37:35.048Z] Copying: 464/1024 [MB] (17 MBps) [2024-11-04T02:37:35.992Z] Copying: 482/1024 [MB] (18 MBps) [2024-11-04T02:37:36.935Z] Copying: 499/1024 [MB] (17 MBps) [2024-11-04T02:37:37.879Z] Copying: 516/1024 [MB] (16 MBps) [2024-11-04T02:37:38.823Z] Copying: 539/1024 [MB] (22 MBps) [2024-11-04T02:37:39.768Z] Copying: 559/1024 [MB] (19 MBps) [2024-11-04T02:37:41.151Z] Copying: 577/1024 [MB] (18 MBps) [2024-11-04T02:37:41.728Z] Copying: 596/1024 [MB] (18 MBps) [2024-11-04T02:37:43.115Z] Copying: 617/1024 [MB] (20 MBps) [2024-11-04T02:37:44.059Z] Copying: 628/1024 [MB] (10 MBps) [2024-11-04T02:37:45.005Z] Copying: 642/1024 [MB] (14 MBps) [2024-11-04T02:37:45.946Z] Copying: 653/1024 [MB] (11 MBps) [2024-11-04T02:37:46.890Z] Copying: 666/1024 [MB] (13 MBps) [2024-11-04T02:37:47.834Z] Copying: 685/1024 [MB] (18 MBps) [2024-11-04T02:37:48.795Z] Copying: 695/1024 [MB] (10 MBps) [2024-11-04T02:37:49.745Z] Copying: 706/1024 [MB] (10 MBps) [2024-11-04T02:37:51.133Z] Copying: 720/1024 [MB] (14 MBps) [2024-11-04T02:37:52.076Z] Copying: 737/1024 [MB] (16 MBps) [2024-11-04T02:37:53.018Z] Copying: 751/1024 [MB] (14 MBps) [2024-11-04T02:37:53.977Z] Copying: 768/1024 [MB] (16 MBps) [2024-11-04T02:37:54.923Z] Copying: 779/1024 [MB] (10 MBps) [2024-11-04T02:37:55.868Z] Copying: 794/1024 [MB] (14 MBps) [2024-11-04T02:37:56.813Z] Copying: 804/1024 [MB] (10 MBps) 
[2024-11-04T02:37:57.758Z] Copying: 820/1024 [MB] (16 MBps) [2024-11-04T02:37:59.143Z] Copying: 831/1024 [MB] (10 MBps) [2024-11-04T02:37:59.711Z] Copying: 842/1024 [MB] (11 MBps) [2024-11-04T02:38:01.097Z] Copying: 864/1024 [MB] (21 MBps) [2024-11-04T02:38:02.038Z] Copying: 879/1024 [MB] (15 MBps) [2024-11-04T02:38:02.998Z] Copying: 896/1024 [MB] (17 MBps) [2024-11-04T02:38:03.948Z] Copying: 926/1024 [MB] (29 MBps) [2024-11-04T02:38:04.893Z] Copying: 945/1024 [MB] (18 MBps) [2024-11-04T02:38:05.876Z] Copying: 962/1024 [MB] (16 MBps) [2024-11-04T02:38:06.818Z] Copying: 973/1024 [MB] (11 MBps) [2024-11-04T02:38:07.761Z] Copying: 989/1024 [MB] (16 MBps) [2024-11-04T02:38:09.142Z] Copying: 1001/1024 [MB] (11 MBps) [2024-11-04T02:38:09.142Z] Copying: 1014/1024 [MB] (13 MBps) [2024-11-04T02:38:09.404Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-04 02:38:09.193346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.293 [2024-11-04 02:38:09.193450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:22.293 [2024-11-04 02:38:09.193482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:22.293 [2024-11-04 02:38:09.193504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.293 [2024-11-04 02:38:09.193556] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:22.293 [2024-11-04 02:38:09.201053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.293 [2024-11-04 02:38:09.201151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:22.293 [2024-11-04 02:38:09.201215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.464 ms 00:27:22.293 [2024-11-04 02:38:09.201244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.293 [2024-11-04 02:38:09.201535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.293 [2024-11-04 02:38:09.201608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:22.293 [2024-11-04 02:38:09.201662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:27:22.293 [2024-11-04 02:38:09.201685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.293 [2024-11-04 02:38:09.205171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.293 [2024-11-04 02:38:09.205245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:22.293 [2024-11-04 02:38:09.205296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.436 ms 00:27:22.293 [2024-11-04 02:38:09.205347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.293 [2024-11-04 02:38:09.211583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.293 [2024-11-04 02:38:09.211688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:22.293 [2024-11-04 02:38:09.211792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.199 ms 00:27:22.293 [2024-11-04 02:38:09.211885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.293 [2024-11-04 02:38:09.236124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.293 [2024-11-04 02:38:09.236236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:22.293 [2024-11-04 02:38:09.236285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 24.139 ms 00:27:22.293 [2024-11-04 02:38:09.236307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.293 [2024-11-04 02:38:09.250238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.293 [2024-11-04 02:38:09.250342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:22.293 [2024-11-04 02:38:09.250389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.807 ms 00:27:22.293 [2024-11-04 02:38:09.250411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.293 [2024-11-04 02:38:09.254719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.293 [2024-11-04 02:38:09.254842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:22.293 [2024-11-04 02:38:09.254937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.043 ms 00:27:22.293 [2024-11-04 02:38:09.254962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.293 [2024-11-04 02:38:09.278962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.293 [2024-11-04 02:38:09.279073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:22.293 [2024-11-04 02:38:09.279124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.970 ms 00:27:22.293 [2024-11-04 02:38:09.279146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.293 [2024-11-04 02:38:09.302572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.293 [2024-11-04 02:38:09.302688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:22.293 [2024-11-04 02:38:09.302735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.388 ms 00:27:22.293 [2024-11-04 02:38:09.302757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.293 [2024-11-04 02:38:09.325929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.293 [2024-11-04 02:38:09.326064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:22.293 [2024-11-04 02:38:09.326123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.876 ms 00:27:22.293 [2024-11-04 02:38:09.326146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.293 [2024-11-04 02:38:09.348909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.293 [2024-11-04 02:38:09.349031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:22.293 [2024-11-04 02:38:09.349082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.648 ms 00:27:22.293 [2024-11-04 02:38:09.349103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.293 [2024-11-04 02:38:09.349369] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:22.293 [2024-11-04 02:38:09.349448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:22.293 [2024-11-04 02:38:09.349555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:22.293 [2024-11-04 02:38:09.349587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:22.293 [2024-11-04 02:38:09.349616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:27:22.293 [2024-11-04 02:38:09.349646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.349676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.349704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.349733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.349795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.349826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.349900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.349936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.349966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.350066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.350100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.350128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.350157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.350187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.350216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.350245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.350274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.350304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.350359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.350448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.350481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.350510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.350540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.350569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.350598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.350652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.350707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.350740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.350799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.350829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.350858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.350906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.350935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.350966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.350994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351841] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:22.294 [2024-11-04 02:38:09.351972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:22.295 [2024-11-04 02:38:09.351982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:22.295 [2024-11-04 02:38:09.351990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:22.295 [2024-11-04 02:38:09.351997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:22.295 [2024-11-04 02:38:09.352005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:22.295 [2024-11-04 02:38:09.352012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:22.295 [2024-11-04 02:38:09.352021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:22.295 [2024-11-04 02:38:09.352037] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:22.295 [2024-11-04 02:38:09.352045] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 51631818-2bcf-469a-a6d7-de5c91fa5a94 00:27:22.295 [2024-11-04 02:38:09.352059] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:22.295 [2024-11-04 02:38:09.352066] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:27:22.295 [2024-11-04 02:38:09.352073] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:22.295 [2024-11-04 02:38:09.352081] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:22.295 [2024-11-04 02:38:09.352090] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:22.295 [2024-11-04 02:38:09.352098] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:22.295 [2024-11-04 02:38:09.352111] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:22.295 [2024-11-04 02:38:09.352117] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:22.295 [2024-11-04 02:38:09.352124] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:22.295 [2024-11-04 02:38:09.352133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.295 [2024-11-04 02:38:09.352141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:22.295 [2024-11-04 02:38:09.352150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.768 ms 00:27:22.295 [2024-11-04 02:38:09.352159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.295 [2024-11-04 02:38:09.364855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.295 [2024-11-04 02:38:09.364901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:22.295 [2024-11-04 02:38:09.364912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.668 ms 00:27:22.295 [2024-11-04 02:38:09.364921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.295 [2024-11-04 02:38:09.365304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.295 [2024-11-04 02:38:09.365317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:22.295 [2024-11-04 02:38:09.365326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:27:22.295 [2024-11-04 02:38:09.365338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.295 [2024-11-04 02:38:09.400462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.295 [2024-11-04 02:38:09.400648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:22.295 [2024-11-04 02:38:09.400669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.295 [2024-11-04 02:38:09.400680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.295 [2024-11-04 02:38:09.400741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.295 [2024-11-04 02:38:09.400752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:22.295 [2024-11-04 02:38:09.400762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.295 [2024-11-04 02:38:09.400778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.295 [2024-11-04 02:38:09.400859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.295 [2024-11-04 02:38:09.400895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:22.295 [2024-11-04 02:38:09.400904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.295 [2024-11-04 02:38:09.400911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.295 [2024-11-04 02:38:09.400927] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.295 [2024-11-04 02:38:09.400938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:22.295 [2024-11-04 02:38:09.400946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.295 [2024-11-04 02:38:09.400954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.556 [2024-11-04 02:38:09.485699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.556 [2024-11-04 02:38:09.485755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:22.556 [2024-11-04 02:38:09.485769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.556 [2024-11-04 02:38:09.485779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.556 [2024-11-04 02:38:09.555548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.556 [2024-11-04 02:38:09.555780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:22.556 [2024-11-04 02:38:09.555802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.556 [2024-11-04 02:38:09.555819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.556 [2024-11-04 02:38:09.555906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.556 [2024-11-04 02:38:09.555917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:22.556 [2024-11-04 02:38:09.555927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.556 [2024-11-04 02:38:09.555935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.556 [2024-11-04 02:38:09.555992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.556 [2024-11-04 02:38:09.556003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:22.556 [2024-11-04 02:38:09.556013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.556 [2024-11-04 02:38:09.556022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.556 [2024-11-04 02:38:09.556134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.556 [2024-11-04 02:38:09.556145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:22.556 [2024-11-04 02:38:09.556156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.556 [2024-11-04 02:38:09.556165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.556 [2024-11-04 02:38:09.556203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.556 [2024-11-04 02:38:09.556213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:22.556 [2024-11-04 02:38:09.556221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.556 [2024-11-04 02:38:09.556229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.556 [2024-11-04 02:38:09.556274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.556 [2024-11-04 02:38:09.556285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:22.556 [2024-11-04 02:38:09.556294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.556 [2024-11-04 02:38:09.556303] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:27:22.556 [2024-11-04 02:38:09.556351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.556 [2024-11-04 02:38:09.556363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:22.556 [2024-11-04 02:38:09.556373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.556 [2024-11-04 02:38:09.556381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.556 [2024-11-04 02:38:09.556520] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 363.164 ms, result 0 00:27:23.498 00:27:23.498 00:27:23.498 02:38:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:26.046 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:27:26.046 02:38:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:27:26.046 02:38:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:27:26.046 02:38:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:26.046 02:38:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:26.046 02:38:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:26.046 02:38:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:26.046 02:38:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:26.046 Process with pid 77559 is not found 00:27:26.046 02:38:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 77559 00:27:26.046 02:38:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@952 -- # '[' -z 77559 ']' 00:27:26.046 02:38:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@956 -- # kill -0 77559 00:27:26.046 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (77559) - No such process 00:27:26.046 02:38:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@979 -- # echo 'Process with pid 77559 is not found' 00:27:26.046 02:38:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:27:26.046 Remove shared memory files 00:27:26.046 02:38:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:27:26.046 02:38:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:26.046 02:38:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:27:26.046 02:38:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:26.046 02:38:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:27:26.046 02:38:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:26.046 02:38:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:26.046 ************************************ 00:27:26.046 END TEST ftl_dirty_shutdown 00:27:26.046 ************************************ 00:27:26.046 00:27:26.046 real 4m10.427s 00:27:26.046 user 4m37.841s 00:27:26.046 sys 0m26.259s 00:27:26.046 02:38:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:26.046 02:38:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:26.308 02:38:13 ftl -- 
ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:26.308 02:38:13 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:26.308 02:38:13 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:26.308 02:38:13 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:26.308 ************************************ 00:27:26.308 START TEST ftl_upgrade_shutdown 00:27:26.308 ************************************ 00:27:26.308 02:38:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:26.308 * Looking for test storage... 00:27:26.308 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:26.308 02:38:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:26.308 02:38:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:27:26.308 02:38:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:26.308 02:38:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:26.308 02:38:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:26.308 02:38:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:26.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.309 --rc genhtml_branch_coverage=1 00:27:26.309 --rc genhtml_function_coverage=1 00:27:26.309 --rc genhtml_legend=1 00:27:26.309 --rc geninfo_all_blocks=1 00:27:26.309 --rc geninfo_unexecuted_blocks=1 00:27:26.309 00:27:26.309 ' 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:26.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.309 --rc genhtml_branch_coverage=1 00:27:26.309 --rc genhtml_function_coverage=1 00:27:26.309 --rc genhtml_legend=1 00:27:26.309 --rc geninfo_all_blocks=1 00:27:26.309 --rc geninfo_unexecuted_blocks=1 00:27:26.309 00:27:26.309 ' 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:26.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.309 --rc genhtml_branch_coverage=1 00:27:26.309 --rc genhtml_function_coverage=1 00:27:26.309 --rc genhtml_legend=1 00:27:26.309 --rc geninfo_all_blocks=1 00:27:26.309 --rc geninfo_unexecuted_blocks=1 00:27:26.309 00:27:26.309 ' 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:26.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.309 --rc genhtml_branch_coverage=1 00:27:26.309 --rc genhtml_function_coverage=1 00:27:26.309 --rc genhtml_legend=1 00:27:26.309 --rc geninfo_all_blocks=1 00:27:26.309 --rc geninfo_unexecuted_blocks=1 00:27:26.309 00:27:26.309 ' 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:27:26.309 02:38:13 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80258 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80258 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 80258 ']' 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:26.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:26.309 02:38:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:26.571 [2024-11-04 02:38:13.499596] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
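[annotation] For anyone replaying this step outside the harness, the exports traced above reduce to the following sketch of the target-side setup (all values are the ones from this run; the BDFs belong to this VM's two QEMU NVMe devices):

    export FTL_BDEV=ftl               # name of the FTL bdev under test
    export FTL_BASE=0000:00:11.0      # base (data) NVMe device
    export FTL_BASE_SIZE=20480        # base size, MiB
    export FTL_CACHE=0000:00:10.0     # NV cache NVMe device
    export FTL_CACHE_SIZE=5120        # cache size, MiB
    export FTL_L2P_DRAM_LIMIT=2       # L2P DRAM budget, MiB
    build/bin/spdk_tgt --cpumask='[0]' &   # target pinned to core 0
    # waitforlisten then polls /var/tmp/spdk.sock until the RPC server answers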
00:27:26.571 [2024-11-04 02:38:13.500000] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80258 ] 00:27:26.571 [2024-11-04 02:38:13.665431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.832 [2024-11-04 02:38:13.763330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.404 02:38:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:27.404 02:38:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:27:27.404 02:38:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:27.404 02:38:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:27:27.404 02:38:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:27:27.404 02:38:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:27.404 02:38:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:27:27.404 02:38:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:27.404 02:38:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:27:27.404 02:38:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:27.404 02:38:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:27:27.404 02:38:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:27.404 02:38:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:27:27.404 02:38:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:27.404 02:38:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:27:27.404 02:38:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:27.404 02:38:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:27:27.404 02:38:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:27:27.404 02:38:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:27:27.404 02:38:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:27.404 02:38:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:27:27.404 02:38:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:27.404 02:38:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:27:27.677 02:38:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:27:27.677 02:38:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:27.677 02:38:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:27:27.677 02:38:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=basen1 00:27:27.677 02:38:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:27:27.677 02:38:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:27:27.677 02:38:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 
-- # local nb 00:27:27.677 02:38:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:27:27.939 02:38:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:27:27.939 { 00:27:27.939 "name": "basen1", 00:27:27.939 "aliases": [ 00:27:27.939 "bbac79cd-d9f8-412f-b01b-ae922285ea65" 00:27:27.939 ], 00:27:27.939 "product_name": "NVMe disk", 00:27:27.939 "block_size": 4096, 00:27:27.939 "num_blocks": 1310720, 00:27:27.939 "uuid": "bbac79cd-d9f8-412f-b01b-ae922285ea65", 00:27:27.939 "numa_id": -1, 00:27:27.939 "assigned_rate_limits": { 00:27:27.939 "rw_ios_per_sec": 0, 00:27:27.939 "rw_mbytes_per_sec": 0, 00:27:27.939 "r_mbytes_per_sec": 0, 00:27:27.939 "w_mbytes_per_sec": 0 00:27:27.939 }, 00:27:27.939 "claimed": true, 00:27:27.939 "claim_type": "read_many_write_one", 00:27:27.939 "zoned": false, 00:27:27.939 "supported_io_types": { 00:27:27.939 "read": true, 00:27:27.939 "write": true, 00:27:27.939 "unmap": true, 00:27:27.939 "flush": true, 00:27:27.939 "reset": true, 00:27:27.939 "nvme_admin": true, 00:27:27.939 "nvme_io": true, 00:27:27.939 "nvme_io_md": false, 00:27:27.939 "write_zeroes": true, 00:27:27.939 "zcopy": false, 00:27:27.939 "get_zone_info": false, 00:27:27.939 "zone_management": false, 00:27:27.939 "zone_append": false, 00:27:27.939 "compare": true, 00:27:27.939 "compare_and_write": false, 00:27:27.939 "abort": true, 00:27:27.939 "seek_hole": false, 00:27:27.939 "seek_data": false, 00:27:27.939 "copy": true, 00:27:27.939 "nvme_iov_md": false 00:27:27.939 }, 00:27:27.939 "driver_specific": { 00:27:27.939 "nvme": [ 00:27:27.939 { 00:27:27.939 "pci_address": "0000:00:11.0", 00:27:27.939 "trid": { 00:27:27.939 "trtype": "PCIe", 00:27:27.939 "traddr": "0000:00:11.0" 00:27:27.939 }, 00:27:27.939 "ctrlr_data": { 00:27:27.939 "cntlid": 0, 00:27:27.939 "vendor_id": "0x1b36", 00:27:27.939 "model_number": "QEMU NVMe Ctrl", 00:27:27.939 "serial_number": "12341", 00:27:27.939 "firmware_revision": "8.0.0", 00:27:27.939 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:27.939 "oacs": { 00:27:27.939 "security": 0, 00:27:27.939 "format": 1, 00:27:27.939 "firmware": 0, 00:27:27.939 "ns_manage": 1 00:27:27.939 }, 00:27:27.939 "multi_ctrlr": false, 00:27:27.939 "ana_reporting": false 00:27:27.940 }, 00:27:27.940 "vs": { 00:27:27.940 "nvme_version": "1.4" 00:27:27.940 }, 00:27:27.940 "ns_data": { 00:27:27.940 "id": 1, 00:27:27.940 "can_share": false 00:27:27.940 } 00:27:27.940 } 00:27:27.940 ], 00:27:27.940 "mp_policy": "active_passive" 00:27:27.940 } 00:27:27.940 } 00:27:27.940 ]' 00:27:27.940 02:38:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:27:27.940 02:38:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:27:27.940 02:38:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:27:27.940 02:38:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:27:27.940 02:38:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:27:27.940 02:38:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:27:27.940 02:38:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:27.940 02:38:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:27:27.940 02:38:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:27.940 02:38:15 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:28.204 02:38:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:28.204 02:38:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=5a4888ba-06af-44a5-9a76-df5fd4ac74de 00:27:28.204 02:38:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:28.204 02:38:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5a4888ba-06af-44a5-9a76-df5fd4ac74de 00:27:28.465 02:38:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:27:28.727 02:38:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=04139210-73a0-4e31-8bb4-4f3ec098db32 00:27:28.727 02:38:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 04139210-73a0-4e31-8bb4-4f3ec098db32 00:27:28.989 02:38:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=90a212de-2b39-450e-810f-6422748a3845 00:27:28.989 02:38:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 90a212de-2b39-450e-810f-6422748a3845 ]] 00:27:28.989 02:38:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 90a212de-2b39-450e-810f-6422748a3845 5120 00:27:28.989 02:38:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:27:28.989 02:38:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:28.989 02:38:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=90a212de-2b39-450e-810f-6422748a3845 00:27:28.989 02:38:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:27:28.989 02:38:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 90a212de-2b39-450e-810f-6422748a3845 00:27:28.989 02:38:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=90a212de-2b39-450e-810f-6422748a3845 00:27:28.989 02:38:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:27:28.989 02:38:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:27:28.989 02:38:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:27:28.989 02:38:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 90a212de-2b39-450e-810f-6422748a3845 00:27:29.251 02:38:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:27:29.251 { 00:27:29.251 "name": "90a212de-2b39-450e-810f-6422748a3845", 00:27:29.251 "aliases": [ 00:27:29.251 "lvs/basen1p0" 00:27:29.251 ], 00:27:29.251 "product_name": "Logical Volume", 00:27:29.251 "block_size": 4096, 00:27:29.251 "num_blocks": 5242880, 00:27:29.251 "uuid": "90a212de-2b39-450e-810f-6422748a3845", 00:27:29.251 "assigned_rate_limits": { 00:27:29.251 "rw_ios_per_sec": 0, 00:27:29.251 "rw_mbytes_per_sec": 0, 00:27:29.251 "r_mbytes_per_sec": 0, 00:27:29.251 "w_mbytes_per_sec": 0 00:27:29.251 }, 00:27:29.251 "claimed": false, 00:27:29.251 "zoned": false, 00:27:29.251 "supported_io_types": { 00:27:29.251 "read": true, 00:27:29.251 "write": true, 00:27:29.251 "unmap": true, 00:27:29.251 "flush": false, 00:27:29.251 "reset": true, 00:27:29.251 "nvme_admin": false, 00:27:29.251 "nvme_io": false, 00:27:29.251 "nvme_io_md": false, 00:27:29.251 "write_zeroes": 
true, 00:27:29.251 "zcopy": false, 00:27:29.251 "get_zone_info": false, 00:27:29.251 "zone_management": false, 00:27:29.251 "zone_append": false, 00:27:29.251 "compare": false, 00:27:29.251 "compare_and_write": false, 00:27:29.251 "abort": false, 00:27:29.251 "seek_hole": true, 00:27:29.251 "seek_data": true, 00:27:29.251 "copy": false, 00:27:29.251 "nvme_iov_md": false 00:27:29.251 }, 00:27:29.251 "driver_specific": { 00:27:29.251 "lvol": { 00:27:29.251 "lvol_store_uuid": "04139210-73a0-4e31-8bb4-4f3ec098db32", 00:27:29.251 "base_bdev": "basen1", 00:27:29.251 "thin_provision": true, 00:27:29.251 "num_allocated_clusters": 0, 00:27:29.251 "snapshot": false, 00:27:29.251 "clone": false, 00:27:29.251 "esnap_clone": false 00:27:29.251 } 00:27:29.251 } 00:27:29.251 } 00:27:29.251 ]' 00:27:29.251 02:38:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:27:29.251 02:38:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:27:29.251 02:38:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:27:29.251 02:38:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=5242880 00:27:29.251 02:38:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=20480 00:27:29.251 02:38:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 20480 00:27:29.251 02:38:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:27:29.251 02:38:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:29.251 02:38:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:27:29.510 02:38:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:29.510 02:38:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:29.510 02:38:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:29.769 02:38:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:29.769 02:38:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:29.769 02:38:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 90a212de-2b39-450e-810f-6422748a3845 -c cachen1p0 --l2p_dram_limit 2 00:27:30.027 [2024-11-04 02:38:16.886282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.027 [2024-11-04 02:38:16.886438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:30.027 [2024-11-04 02:38:16.886457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:30.027 [2024-11-04 02:38:16.886464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.027 [2024-11-04 02:38:16.886511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.027 [2024-11-04 02:38:16.886519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:30.027 [2024-11-04 02:38:16.886527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:27:30.027 [2024-11-04 02:38:16.886533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.027 [2024-11-04 02:38:16.886549] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:30.027 [2024-11-04 
02:38:16.887091] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:30.027 [2024-11-04 02:38:16.887132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.027 [2024-11-04 02:38:16.887138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:30.027 [2024-11-04 02:38:16.887148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.584 ms 00:27:30.027 [2024-11-04 02:38:16.887154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.027 [2024-11-04 02:38:16.887290] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID c4d73512-48c8-43f4-a384-6e9f2bccccb1 00:27:30.027 [2024-11-04 02:38:16.888251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.027 [2024-11-04 02:38:16.888279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:30.027 [2024-11-04 02:38:16.888288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:27:30.027 [2024-11-04 02:38:16.888295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.027 [2024-11-04 02:38:16.892936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.027 [2024-11-04 02:38:16.893048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:30.027 [2024-11-04 02:38:16.893060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.608 ms 00:27:30.028 [2024-11-04 02:38:16.893069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.028 [2024-11-04 02:38:16.893101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.028 [2024-11-04 02:38:16.893109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:30.028 [2024-11-04 02:38:16.893115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:30.028 [2024-11-04 02:38:16.893124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.028 [2024-11-04 02:38:16.893158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.028 [2024-11-04 02:38:16.893167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:30.028 [2024-11-04 02:38:16.893174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:30.028 [2024-11-04 02:38:16.893183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.028 [2024-11-04 02:38:16.893202] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:30.028 [2024-11-04 02:38:16.896032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.028 [2024-11-04 02:38:16.896054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:30.028 [2024-11-04 02:38:16.896063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.834 ms 00:27:30.028 [2024-11-04 02:38:16.896072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.028 [2024-11-04 02:38:16.896092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.028 [2024-11-04 02:38:16.896099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:30.028 [2024-11-04 02:38:16.896106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:30.028 [2024-11-04 02:38:16.896112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:30.028 [2024-11-04 02:38:16.896133] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:30.028 [2024-11-04 02:38:16.896238] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:30.028 [2024-11-04 02:38:16.896301] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:30.028 [2024-11-04 02:38:16.896311] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:30.028 [2024-11-04 02:38:16.896321] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:30.028 [2024-11-04 02:38:16.896328] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:30.028 [2024-11-04 02:38:16.896335] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:30.028 [2024-11-04 02:38:16.896341] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:30.028 [2024-11-04 02:38:16.896348] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:30.028 [2024-11-04 02:38:16.896353] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:30.028 [2024-11-04 02:38:16.896362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.028 [2024-11-04 02:38:16.896368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:30.028 [2024-11-04 02:38:16.896375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.231 ms 00:27:30.028 [2024-11-04 02:38:16.896380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.028 [2024-11-04 02:38:16.896445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.028 [2024-11-04 02:38:16.896452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:30.028 [2024-11-04 02:38:16.896460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:27:30.028 [2024-11-04 02:38:16.896470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.028 [2024-11-04 02:38:16.896544] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:30.028 [2024-11-04 02:38:16.896553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:30.028 [2024-11-04 02:38:16.896561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:30.028 [2024-11-04 02:38:16.896567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.028 [2024-11-04 02:38:16.896574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:30.028 [2024-11-04 02:38:16.896579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:30.028 [2024-11-04 02:38:16.896586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:30.028 [2024-11-04 02:38:16.896591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:30.028 [2024-11-04 02:38:16.896597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:30.028 [2024-11-04 02:38:16.896602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.028 [2024-11-04 02:38:16.896608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:30.028 [2024-11-04 02:38:16.896613] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:27:30.028 [2024-11-04 02:38:16.896619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.028 [2024-11-04 02:38:16.896624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:30.028 [2024-11-04 02:38:16.896631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:30.028 [2024-11-04 02:38:16.896636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.028 [2024-11-04 02:38:16.896643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:30.028 [2024-11-04 02:38:16.896648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:30.028 [2024-11-04 02:38:16.896654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.028 [2024-11-04 02:38:16.896660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:30.028 [2024-11-04 02:38:16.896667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:30.028 [2024-11-04 02:38:16.896672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:30.028 [2024-11-04 02:38:16.896679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:30.028 [2024-11-04 02:38:16.896684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:30.028 [2024-11-04 02:38:16.896691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:30.028 [2024-11-04 02:38:16.896696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:30.028 [2024-11-04 02:38:16.896703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:30.028 [2024-11-04 02:38:16.896707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:30.028 [2024-11-04 02:38:16.896714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:30.028 [2024-11-04 02:38:16.896718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:30.028 [2024-11-04 02:38:16.896725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:30.028 [2024-11-04 02:38:16.896730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:30.028 [2024-11-04 02:38:16.896737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:30.028 [2024-11-04 02:38:16.896743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.028 [2024-11-04 02:38:16.896749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:30.028 [2024-11-04 02:38:16.896754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:30.028 [2024-11-04 02:38:16.896760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.028 [2024-11-04 02:38:16.896765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:30.028 [2024-11-04 02:38:16.896772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:30.028 [2024-11-04 02:38:16.896777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.028 [2024-11-04 02:38:16.896783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:30.028 [2024-11-04 02:38:16.896789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:30.028 [2024-11-04 02:38:16.896795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.028 [2024-11-04 02:38:16.896800] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:27:30.028 [2024-11-04 02:38:16.896807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:30.028 [2024-11-04 02:38:16.896813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:30.028 [2024-11-04 02:38:16.896819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.028 [2024-11-04 02:38:16.896825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:30.028 [2024-11-04 02:38:16.896834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:30.028 [2024-11-04 02:38:16.896839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:30.028 [2024-11-04 02:38:16.896845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:30.028 [2024-11-04 02:38:16.896850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:30.028 [2024-11-04 02:38:16.896856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:30.028 [2024-11-04 02:38:16.896882] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:30.028 [2024-11-04 02:38:16.896891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:30.028 [2024-11-04 02:38:16.896898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:30.028 [2024-11-04 02:38:16.896906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:30.028 [2024-11-04 02:38:16.896911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:30.028 [2024-11-04 02:38:16.896918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:30.028 [2024-11-04 02:38:16.896924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:30.028 [2024-11-04 02:38:16.896930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:30.028 [2024-11-04 02:38:16.896936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:30.028 [2024-11-04 02:38:16.896943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:30.028 [2024-11-04 02:38:16.896948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:30.028 [2024-11-04 02:38:16.896957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:30.028 [2024-11-04 02:38:16.896963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:30.029 [2024-11-04 02:38:16.896970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:30.029 [2024-11-04 02:38:16.896975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:30.029 [2024-11-04 02:38:16.896982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:30.029 [2024-11-04 02:38:16.896987] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:30.029 [2024-11-04 02:38:16.896995] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:30.029 [2024-11-04 02:38:16.897003] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:30.029 [2024-11-04 02:38:16.897011] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:30.029 [2024-11-04 02:38:16.897017] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:30.029 [2024-11-04 02:38:16.897024] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:30.029 [2024-11-04 02:38:16.897030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.029 [2024-11-04 02:38:16.897036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:30.029 [2024-11-04 02:38:16.897042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.539 ms 00:27:30.029 [2024-11-04 02:38:16.897049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.029 [2024-11-04 02:38:16.897078] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
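[annotation] Condensed from the xtrace above, the FTL instance whose layout was just dumped was assembled with this RPC sequence (the UUIDs are the ones reported in this run, not placeholders; paths abbreviated relative to the spdk repo):

    scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0   # -> basen1
    scripts/rpc.py bdev_lvol_delete_lvstore -u 5a4888ba-06af-44a5-9a76-df5fd4ac74de  # clear the stale lvstore found by bdev_lvol_get_lvstores
    scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs                           # -> 04139210-...
    scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 04139210-73a0-4e31-8bb4-4f3ec098db32  # thin lvol -> 90a212de-...
    scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0  # -> cachen1
    scripts/rpc.py bdev_split_create cachen1 -s 5120 1                           # -> cachen1p0
    scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 90a212de-2b39-450e-810f-6422748a3845 -c cachen1p0 --l2p_dram_limit 2

Because the superblock is brand new (UUID c4d73512-...), FTL must scrub the NV cache data region before first use; the trace that follows shows all 5 chunks scrubbed in about 3.5 s.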
00:27:30.029 [2024-11-04 02:38:16.897088] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:33.319 [2024-11-04 02:38:20.383722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.319 [2024-11-04 02:38:20.383821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:33.319 [2024-11-04 02:38:20.383841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3486.627 ms 00:27:33.319 [2024-11-04 02:38:20.383853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.319 [2024-11-04 02:38:20.415790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.319 [2024-11-04 02:38:20.415883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:33.319 [2024-11-04 02:38:20.415899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.654 ms 00:27:33.319 [2024-11-04 02:38:20.415911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.319 [2024-11-04 02:38:20.416001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.319 [2024-11-04 02:38:20.416016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:33.319 [2024-11-04 02:38:20.416026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:27:33.319 [2024-11-04 02:38:20.416040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.578 [2024-11-04 02:38:20.452355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.578 [2024-11-04 02:38:20.452418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:33.578 [2024-11-04 02:38:20.452432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.262 ms 00:27:33.578 [2024-11-04 02:38:20.452445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.578 [2024-11-04 02:38:20.452485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.578 [2024-11-04 02:38:20.452496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:33.578 [2024-11-04 02:38:20.452506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:33.578 [2024-11-04 02:38:20.452519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.578 [2024-11-04 02:38:20.453161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.578 [2024-11-04 02:38:20.453195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:33.578 [2024-11-04 02:38:20.453206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.586 ms 00:27:33.578 [2024-11-04 02:38:20.453217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.578 [2024-11-04 02:38:20.453274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.578 [2024-11-04 02:38:20.453286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:33.578 [2024-11-04 02:38:20.453295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:33.578 [2024-11-04 02:38:20.453308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.578 [2024-11-04 02:38:20.471073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.578 [2024-11-04 02:38:20.471126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:33.578 [2024-11-04 02:38:20.471138] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.742 ms 00:27:33.578 [2024-11-04 02:38:20.471151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.578 [2024-11-04 02:38:20.484561] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:33.578 [2024-11-04 02:38:20.485949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.578 [2024-11-04 02:38:20.486219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:33.578 [2024-11-04 02:38:20.486247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.703 ms 00:27:33.578 [2024-11-04 02:38:20.486257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.578 [2024-11-04 02:38:20.529279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.578 [2024-11-04 02:38:20.529537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:33.578 [2024-11-04 02:38:20.529571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.979 ms 00:27:33.578 [2024-11-04 02:38:20.529581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.578 [2024-11-04 02:38:20.529691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.578 [2024-11-04 02:38:20.529703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:33.578 [2024-11-04 02:38:20.529718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:27:33.578 [2024-11-04 02:38:20.529730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.578 [2024-11-04 02:38:20.556302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.578 [2024-11-04 02:38:20.556505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:33.579 [2024-11-04 02:38:20.556535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.507 ms 00:27:33.579 [2024-11-04 02:38:20.556544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.579 [2024-11-04 02:38:20.582844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.579 [2024-11-04 02:38:20.582911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:33.579 [2024-11-04 02:38:20.582933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.996 ms 00:27:33.579 [2024-11-04 02:38:20.582941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.579 [2024-11-04 02:38:20.583584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.579 [2024-11-04 02:38:20.583604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:33.579 [2024-11-04 02:38:20.583616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.587 ms 00:27:33.579 [2024-11-04 02:38:20.583652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.579 [2024-11-04 02:38:20.665799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.579 [2024-11-04 02:38:20.665858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:33.579 [2024-11-04 02:38:20.665896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 82.076 ms 00:27:33.579 [2024-11-04 02:38:20.665905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.838 [2024-11-04 02:38:20.695036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
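[annotation] The "l2p maximum resident size is: 1 (of 2) MiB" notice above ties back to the layout dump: the device exposes 3774873 logical blocks, and at 4 bytes per L2P entry the full table needs about 14.4 MiB, which is why the l2p region occupies 14.50 MiB (rounded up to block granularity). With --l2p_dram_limit 2, only a 2 MiB window of that table may live in DRAM, and the l2p cache caps its resident set at 1 MiB of that budget. A quick back-of-envelope check:

    echo "3774873 * 4 / 1048576" | bc -l   # ~14.40 MiB for the full L2P table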
00:27:33.838 [2024-11-04 02:38:20.695099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:33.838 [2024-11-04 02:38:20.695129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.018 ms 00:27:33.838 [2024-11-04 02:38:20.695139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.838 [2024-11-04 02:38:20.722371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.838 [2024-11-04 02:38:20.722427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:27:33.838 [2024-11-04 02:38:20.722445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.161 ms 00:27:33.838 [2024-11-04 02:38:20.722453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.838 [2024-11-04 02:38:20.749569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.838 [2024-11-04 02:38:20.749798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:33.838 [2024-11-04 02:38:20.749828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.057 ms 00:27:33.838 [2024-11-04 02:38:20.749837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.838 [2024-11-04 02:38:20.749912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.838 [2024-11-04 02:38:20.749923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:33.838 [2024-11-04 02:38:20.749938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:33.838 [2024-11-04 02:38:20.749945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.838 [2024-11-04 02:38:20.750058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.838 [2024-11-04 02:38:20.750070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:33.838 [2024-11-04 02:38:20.750082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:27:33.838 [2024-11-04 02:38:20.750090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.838 [2024-11-04 02:38:20.751337] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3864.526 ms, result 0 00:27:33.838 { 00:27:33.838 "name": "ftl", 00:27:33.838 "uuid": "c4d73512-48c8-43f4-a384-6e9f2bccccb1" 00:27:33.838 } 00:27:33.838 02:38:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:34.097 [2024-11-04 02:38:20.978381] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:34.097 02:38:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:34.357 02:38:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:34.357 [2024-11-04 02:38:21.422861] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:34.357 02:38:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:34.616 [2024-11-04 02:38:21.652454] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:34.616 02:38:21 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:35.185 Fill FTL, iteration 1 00:27:35.185 02:38:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:35.185 02:38:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:35.185 02:38:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:35.185 02:38:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:35.185 02:38:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:35.185 02:38:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:35.185 02:38:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:35.185 02:38:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:35.185 02:38:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:35.185 02:38:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:35.185 02:38:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:35.185 02:38:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:35.185 02:38:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:35.185 02:38:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:35.185 02:38:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:35.185 02:38:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:35.185 02:38:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:35.185 02:38:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=80380 00:27:35.185 02:38:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:35.185 02:38:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 80380 /var/tmp/spdk.tgt.sock 00:27:35.185 02:38:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 80380 ']' 00:27:35.185 02:38:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:35.185 02:38:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:35.185 02:38:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:35.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:35.185 02:38:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:35.185 02:38:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:35.185 [2024-11-04 02:38:22.111453] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
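[annotation] To reach the FTL bdev from a second process, the harness exports it over NVMe/TCP and attaches it from a throwaway initiator whose bdev config is dumped into ini.json. Condensed from the trace, with the addresses and NQN exactly as used in this run:

    # target side (default RPC socket)
    scripts/rpc.py nvmf_create_transport --trtype TCP
    scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
    # initiator side (separate app on core 1, its own RPC socket)
    scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl \
        -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0   # -> ftln1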
00:27:35.185 [2024-11-04 02:38:22.111830] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80380 ] 00:27:35.185 [2024-11-04 02:38:22.273984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.457 [2024-11-04 02:38:22.399325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:36.031 02:38:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:36.031 02:38:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:27:36.031 02:38:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:36.289 ftln1 00:27:36.289 02:38:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:36.289 02:38:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:36.547 02:38:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:27:36.547 02:38:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 80380 00:27:36.547 02:38:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 80380 ']' 00:27:36.547 02:38:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 80380 00:27:36.547 02:38:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:27:36.547 02:38:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:36.547 02:38:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80380 00:27:36.547 02:38:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:36.547 02:38:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:36.547 killing process with pid 80380 00:27:36.547 02:38:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80380' 00:27:36.547 02:38:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 80380 00:27:36.547 02:38:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 80380 00:27:37.922 02:38:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:37.922 02:38:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:37.922 [2024-11-04 02:38:25.022789] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
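The initiator half is deliberately throwaway: a second spdk_tgt pinned to core 1 and bound to its own RPC socket attaches over TCP (the namespace surfaces as bdev ftln1), its bdev subsystem config is snapshotted into ini.json, and the helper process is killed again. Every subsequent spdk_dd run rebuilds the same attachment from that JSON via --json instead of needing a live helper. The capture sequence, sketched with the paths from the trace (common.sh@167-173):

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
  $rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0   # -> ftln1
  {
    echo '{"subsystems": ['
    $rpc save_subsystem_config -n bdev   # bdev subsystem only
    echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json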
00:27:37.922 [2024-11-04 02:38:25.023119] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80427 ] 00:27:38.180 [2024-11-04 02:38:25.178094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.180 [2024-11-04 02:38:25.281897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:39.559  [2024-11-04T02:38:28.054Z] Copying: 191/1024 [MB] (191 MBps) [2024-11-04T02:38:28.996Z] Copying: 425/1024 [MB] (234 MBps) [2024-11-04T02:38:29.937Z] Copying: 677/1024 [MB] (252 MBps) [2024-11-04T02:38:30.198Z] Copying: 921/1024 [MB] (244 MBps) [2024-11-04T02:38:30.769Z] Copying: 1024/1024 [MB] (average 229 MBps) 00:27:43.658 00:27:43.659 Calculate MD5 checksum, iteration 1 00:27:43.659 02:38:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:43.659 02:38:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:43.659 02:38:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:43.659 02:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:43.659 02:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:43.659 02:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:43.659 02:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:43.659 02:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:43.920 [2024-11-04 02:38:30.794470] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
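First fill pass done: 1 GiB of /dev/urandom pushed through the TCP path in 1 MiB units at queue depth 2, averaging 229 MBps. One detail worth calling out: spdk_dd, like dd, counts --seek and --skip in --bs-sized I/O units rather than bytes, which is why the harness bumps seek from 0 to 1024 here; 1024 x 1048576 bytes is exactly the 1073741824-byte iteration size set at upgrade_shutdown.sh@28, so the second fill lands flush at the 1 GiB boundary. The standalone form of the write, as launched above:

  # write 1024 x 1 MiB of random data into ftln1 at output offset 0 (in 1 MiB units)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0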
00:27:43.920 [2024-11-04 02:38:30.794732] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80492 ] 00:27:43.920 [2024-11-04 02:38:30.949111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.180 [2024-11-04 02:38:31.039677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.564  [2024-11-04T02:38:32.935Z] Copying: 672/1024 [MB] (672 MBps) [2024-11-04T02:38:33.503Z] Copying: 1024/1024 [MB] (average 660 MBps) 00:27:46.392 00:27:46.392 02:38:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:46.392 02:38:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:48.934 02:38:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:48.934 Fill FTL, iteration 2 00:27:48.934 02:38:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=b37e3e856fc378715c313f950aee231d 00:27:48.934 02:38:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:48.934 02:38:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:48.934 02:38:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:48.934 02:38:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:48.934 02:38:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:48.934 02:38:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:48.934 02:38:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:48.934 02:38:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:48.934 02:38:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:48.934 [2024-11-04 02:38:35.703662] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
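The verify half mirrors the fill: the same 1 GiB extent is read back through the target into a scratch file (~660 MBps here, versus 229 MBps on the write side) and hashed, and the digest is stashed for comparison against a re-read after the upgrade shutdown and restart. Boiled down from upgrade_shutdown.sh@44-48:

  # read the extent back and record its md5 for the post-upgrade check
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
  sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d ' ')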
00:27:48.934 [2024-11-04 02:38:35.703810] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80551 ] 00:27:48.934 [2024-11-04 02:38:35.867451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.934 [2024-11-04 02:38:35.962568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:50.318  [2024-11-04T02:38:38.373Z] Copying: 231/1024 [MB] (231 MBps) [2024-11-04T02:38:39.318Z] Copying: 472/1024 [MB] (241 MBps) [2024-11-04T02:38:40.738Z] Copying: 712/1024 [MB] (240 MBps) [2024-11-04T02:38:40.738Z] Copying: 957/1024 [MB] (245 MBps) [2024-11-04T02:38:41.311Z] Copying: 1024/1024 [MB] (average 238 MBps) 00:27:54.200 00:27:54.200 Calculate MD5 checksum, iteration 2 00:27:54.200 02:38:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:54.200 02:38:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:54.200 02:38:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:54.200 02:38:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:54.200 02:38:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:54.200 02:38:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:54.200 02:38:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:54.200 02:38:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:54.200 [2024-11-04 02:38:41.275273] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:27:54.200 [2024-11-04 02:38:41.275603] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80604 ] 00:27:54.461 [2024-11-04 02:38:41.431935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:54.461 [2024-11-04 02:38:41.538743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:56.378  [2024-11-04T02:38:43.751Z] Copying: 644/1024 [MB] (644 MBps) [2024-11-04T02:38:44.694Z] Copying: 1024/1024 [MB] (average 644 MBps) 00:27:57.583 00:27:57.583 02:38:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:57.583 02:38:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:00.117 02:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:00.117 02:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=1b5bb72ef51c0e571b8b5f444024af00 00:28:00.117 02:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:00.117 02:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:00.117 02:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:00.117 [2024-11-04 02:38:46.828009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:00.117 [2024-11-04 02:38:46.828049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:00.117 [2024-11-04 02:38:46.828059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:00.117 [2024-11-04 02:38:46.828067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:00.117 [2024-11-04 02:38:46.828084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:00.117 [2024-11-04 02:38:46.828090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:00.117 [2024-11-04 02:38:46.828096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:00.117 [2024-11-04 02:38:46.828102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:00.117 [2024-11-04 02:38:46.828120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:00.117 [2024-11-04 02:38:46.828126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:00.117 [2024-11-04 02:38:46.828132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:00.117 [2024-11-04 02:38:46.828137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:00.117 [2024-11-04 02:38:46.828184] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.167 ms, result 0 00:28:00.117 true 00:28:00.117 02:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:00.117 { 00:28:00.117 "name": "ftl", 00:28:00.117 "properties": [ 00:28:00.117 { 00:28:00.117 "name": "superblock_version", 00:28:00.117 "value": 5, 00:28:00.117 "read-only": true 00:28:00.117 }, 00:28:00.117 { 00:28:00.117 "name": "base_device", 00:28:00.117 "bands": [ 00:28:00.117 { 00:28:00.117 "id": 0, 00:28:00.117 "state": "FREE", 00:28:00.117 "validity": 0.0 
00:28:00.117 }, 00:28:00.117 { 00:28:00.117 "id": 1, 00:28:00.117 "state": "FREE", 00:28:00.117 "validity": 0.0 00:28:00.117 }, 00:28:00.117 { 00:28:00.117 "id": 2, 00:28:00.117 "state": "FREE", 00:28:00.117 "validity": 0.0 00:28:00.117 }, 00:28:00.117 { 00:28:00.117 "id": 3, 00:28:00.117 "state": "FREE", 00:28:00.117 "validity": 0.0 00:28:00.117 }, 00:28:00.117 { 00:28:00.117 "id": 4, 00:28:00.117 "state": "FREE", 00:28:00.117 "validity": 0.0 00:28:00.117 }, 00:28:00.117 { 00:28:00.117 "id": 5, 00:28:00.117 "state": "FREE", 00:28:00.117 "validity": 0.0 00:28:00.117 }, 00:28:00.117 { 00:28:00.117 "id": 6, 00:28:00.117 "state": "FREE", 00:28:00.117 "validity": 0.0 00:28:00.117 }, 00:28:00.117 { 00:28:00.117 "id": 7, 00:28:00.117 "state": "FREE", 00:28:00.117 "validity": 0.0 00:28:00.117 }, 00:28:00.117 { 00:28:00.117 "id": 8, 00:28:00.117 "state": "FREE", 00:28:00.117 "validity": 0.0 00:28:00.117 }, 00:28:00.117 { 00:28:00.117 "id": 9, 00:28:00.117 "state": "FREE", 00:28:00.117 "validity": 0.0 00:28:00.117 }, 00:28:00.117 { 00:28:00.117 "id": 10, 00:28:00.117 "state": "FREE", 00:28:00.117 "validity": 0.0 00:28:00.117 }, 00:28:00.117 { 00:28:00.117 "id": 11, 00:28:00.117 "state": "FREE", 00:28:00.117 "validity": 0.0 00:28:00.117 }, 00:28:00.117 { 00:28:00.117 "id": 12, 00:28:00.117 "state": "FREE", 00:28:00.117 "validity": 0.0 00:28:00.117 }, 00:28:00.117 { 00:28:00.117 "id": 13, 00:28:00.117 "state": "FREE", 00:28:00.117 "validity": 0.0 00:28:00.117 }, 00:28:00.117 { 00:28:00.117 "id": 14, 00:28:00.117 "state": "FREE", 00:28:00.117 "validity": 0.0 00:28:00.117 }, 00:28:00.117 { 00:28:00.117 "id": 15, 00:28:00.117 "state": "FREE", 00:28:00.117 "validity": 0.0 00:28:00.117 }, 00:28:00.117 { 00:28:00.117 "id": 16, 00:28:00.117 "state": "FREE", 00:28:00.117 "validity": 0.0 00:28:00.117 }, 00:28:00.117 { 00:28:00.117 "id": 17, 00:28:00.117 "state": "FREE", 00:28:00.117 "validity": 0.0 00:28:00.117 } 00:28:00.117 ], 00:28:00.117 "read-only": true 00:28:00.117 }, 00:28:00.117 { 00:28:00.117 "name": "cache_device", 00:28:00.117 "type": "bdev", 00:28:00.117 "chunks": [ 00:28:00.117 { 00:28:00.117 "id": 0, 00:28:00.117 "state": "INACTIVE", 00:28:00.117 "utilization": 0.0 00:28:00.117 }, 00:28:00.117 { 00:28:00.117 "id": 1, 00:28:00.117 "state": "CLOSED", 00:28:00.117 "utilization": 1.0 00:28:00.117 }, 00:28:00.117 { 00:28:00.117 "id": 2, 00:28:00.117 "state": "CLOSED", 00:28:00.117 "utilization": 1.0 00:28:00.117 }, 00:28:00.117 { 00:28:00.117 "id": 3, 00:28:00.117 "state": "OPEN", 00:28:00.117 "utilization": 0.001953125 00:28:00.117 }, 00:28:00.117 { 00:28:00.117 "id": 4, 00:28:00.117 "state": "OPEN", 00:28:00.117 "utilization": 0.0 00:28:00.117 } 00:28:00.117 ], 00:28:00.117 "read-only": true 00:28:00.117 }, 00:28:00.117 { 00:28:00.117 "name": "verbose_mode", 00:28:00.117 "value": true, 00:28:00.117 "unit": "", 00:28:00.117 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:00.117 }, 00:28:00.117 { 00:28:00.117 "name": "prep_upgrade_on_shutdown", 00:28:00.117 "value": false, 00:28:00.117 "unit": "", 00:28:00.117 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:00.117 } 00:28:00.118 ] 00:28:00.118 } 00:28:00.118 02:38:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:28:00.376 [2024-11-04 02:38:47.232298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
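With the data in place and verified, the test flips prep_upgrade_on_shutdown, the property whose own description in the dump above reads "During shutdown, FTL executes all actions which are needed for upgrade to a new version". The set-and-readback pattern, including the jq check traced just below that insists the NV cache still holds data to migrate (the filter counts cache_device chunks with non-zero utilization; here it comes back 3, matching the two CLOSED chunks plus the partially filled OPEN one in the dump):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
  used=$($rpc bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
  [[ $used -eq 0 ]]   # upgrade_shutdown.sh@64: expected to be false, i.e. there is dirty cache data to carry over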
00:28:00.376 [2024-11-04 02:38:47.232417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:00.376 [2024-11-04 02:38:47.232465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:00.376 [2024-11-04 02:38:47.232483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:00.376 [2024-11-04 02:38:47.232514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:00.376 [2024-11-04 02:38:47.232530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:00.376 [2024-11-04 02:38:47.232544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:00.376 [2024-11-04 02:38:47.232559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:00.376 [2024-11-04 02:38:47.232582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:00.376 [2024-11-04 02:38:47.232597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:00.376 [2024-11-04 02:38:47.232612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:00.376 [2024-11-04 02:38:47.232658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:00.376 [2024-11-04 02:38:47.232715] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.402 ms, result 0 00:28:00.376 true 00:28:00.376 02:38:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:00.376 02:38:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:28:00.376 02:38:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:00.376 02:38:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:28:00.376 02:38:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:28:00.376 02:38:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:00.634 [2024-11-04 02:38:47.652364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:00.634 [2024-11-04 02:38:47.652396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:00.634 [2024-11-04 02:38:47.652405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:00.634 [2024-11-04 02:38:47.652411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:00.634 [2024-11-04 02:38:47.652427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:00.634 [2024-11-04 02:38:47.652433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:00.634 [2024-11-04 02:38:47.652439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:00.634 [2024-11-04 02:38:47.652444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:00.634 [2024-11-04 02:38:47.652459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:00.634 [2024-11-04 02:38:47.652464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:00.634 [2024-11-04 02:38:47.652470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:00.634 [2024-11-04 02:38:47.652475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:28:00.634 [2024-11-04 02:38:47.652515] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.142 ms, result 0 00:28:00.634 true 00:28:00.634 02:38:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:00.893 { 00:28:00.893 "name": "ftl", 00:28:00.893 "properties": [ 00:28:00.893 { 00:28:00.893 "name": "superblock_version", 00:28:00.893 "value": 5, 00:28:00.893 "read-only": true 00:28:00.893 }, 00:28:00.893 { 00:28:00.893 "name": "base_device", 00:28:00.893 "bands": [ 00:28:00.893 { 00:28:00.893 "id": 0, 00:28:00.893 "state": "FREE", 00:28:00.893 "validity": 0.0 00:28:00.893 }, 00:28:00.893 { 00:28:00.893 "id": 1, 00:28:00.893 "state": "FREE", 00:28:00.893 "validity": 0.0 00:28:00.893 }, 00:28:00.893 { 00:28:00.893 "id": 2, 00:28:00.893 "state": "FREE", 00:28:00.893 "validity": 0.0 00:28:00.893 }, 00:28:00.893 { 00:28:00.893 "id": 3, 00:28:00.893 "state": "FREE", 00:28:00.893 "validity": 0.0 00:28:00.893 }, 00:28:00.893 { 00:28:00.893 "id": 4, 00:28:00.893 "state": "FREE", 00:28:00.893 "validity": 0.0 00:28:00.893 }, 00:28:00.893 { 00:28:00.893 "id": 5, 00:28:00.893 "state": "FREE", 00:28:00.893 "validity": 0.0 00:28:00.893 }, 00:28:00.893 { 00:28:00.893 "id": 6, 00:28:00.893 "state": "FREE", 00:28:00.893 "validity": 0.0 00:28:00.893 }, 00:28:00.893 { 00:28:00.893 "id": 7, 00:28:00.893 "state": "FREE", 00:28:00.893 "validity": 0.0 00:28:00.893 }, 00:28:00.893 { 00:28:00.893 "id": 8, 00:28:00.893 "state": "FREE", 00:28:00.893 "validity": 0.0 00:28:00.893 }, 00:28:00.893 { 00:28:00.893 "id": 9, 00:28:00.893 "state": "FREE", 00:28:00.893 "validity": 0.0 00:28:00.893 }, 00:28:00.893 { 00:28:00.893 "id": 10, 00:28:00.893 "state": "FREE", 00:28:00.893 "validity": 0.0 00:28:00.893 }, 00:28:00.893 { 00:28:00.893 "id": 11, 00:28:00.893 "state": "FREE", 00:28:00.893 "validity": 0.0 00:28:00.893 }, 00:28:00.893 { 00:28:00.893 "id": 12, 00:28:00.893 "state": "FREE", 00:28:00.893 "validity": 0.0 00:28:00.893 }, 00:28:00.893 { 00:28:00.893 "id": 13, 00:28:00.893 "state": "FREE", 00:28:00.893 "validity": 0.0 00:28:00.893 }, 00:28:00.893 { 00:28:00.893 "id": 14, 00:28:00.893 "state": "FREE", 00:28:00.893 "validity": 0.0 00:28:00.893 }, 00:28:00.893 { 00:28:00.893 "id": 15, 00:28:00.893 "state": "FREE", 00:28:00.893 "validity": 0.0 00:28:00.893 }, 00:28:00.893 { 00:28:00.893 "id": 16, 00:28:00.893 "state": "FREE", 00:28:00.893 "validity": 0.0 00:28:00.893 }, 00:28:00.893 { 00:28:00.893 "id": 17, 00:28:00.893 "state": "FREE", 00:28:00.893 "validity": 0.0 00:28:00.893 } 00:28:00.893 ], 00:28:00.893 "read-only": true 00:28:00.893 }, 00:28:00.893 { 00:28:00.893 "name": "cache_device", 00:28:00.893 "type": "bdev", 00:28:00.893 "chunks": [ 00:28:00.893 { 00:28:00.893 "id": 0, 00:28:00.893 "state": "INACTIVE", 00:28:00.893 "utilization": 0.0 00:28:00.893 }, 00:28:00.893 { 00:28:00.893 "id": 1, 00:28:00.893 "state": "CLOSED", 00:28:00.893 "utilization": 1.0 00:28:00.893 }, 00:28:00.893 { 00:28:00.893 "id": 2, 00:28:00.893 "state": "CLOSED", 00:28:00.893 "utilization": 1.0 00:28:00.893 }, 00:28:00.893 { 00:28:00.893 "id": 3, 00:28:00.893 "state": "OPEN", 00:28:00.893 "utilization": 0.001953125 00:28:00.893 }, 00:28:00.893 { 00:28:00.893 "id": 4, 00:28:00.893 "state": "OPEN", 00:28:00.893 "utilization": 0.0 00:28:00.893 } 00:28:00.893 ], 00:28:00.893 "read-only": true 00:28:00.893 }, 00:28:00.893 { 00:28:00.893 "name": "verbose_mode", 
00:28:00.894 "value": true, 00:28:00.894 "unit": "", 00:28:00.894 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:00.894 }, 00:28:00.894 { 00:28:00.894 "name": "prep_upgrade_on_shutdown", 00:28:00.894 "value": true, 00:28:00.894 "unit": "", 00:28:00.894 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:00.894 } 00:28:00.894 ] 00:28:00.894 } 00:28:00.894 02:38:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:28:00.894 02:38:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80258 ]] 00:28:00.894 02:38:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80258 00:28:00.894 02:38:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 80258 ']' 00:28:00.894 02:38:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 80258 00:28:00.894 02:38:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:28:00.894 02:38:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:00.894 02:38:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80258 00:28:00.894 02:38:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:00.894 02:38:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:00.894 killing process with pid 80258 00:28:00.894 02:38:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80258' 00:28:00.894 02:38:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 80258 00:28:00.894 02:38:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 80258 00:28:01.460 [2024-11-04 02:38:48.420092] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:01.460 [2024-11-04 02:38:48.432176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.460 [2024-11-04 02:38:48.432211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:01.460 [2024-11-04 02:38:48.432221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:01.460 [2024-11-04 02:38:48.432227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.460 [2024-11-04 02:38:48.432245] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:01.460 [2024-11-04 02:38:48.434311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.460 [2024-11-04 02:38:48.434336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:01.460 [2024-11-04 02:38:48.434344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.056 ms 00:28:01.460 [2024-11-04 02:38:48.434350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.457 [2024-11-04 02:38:56.799098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.457 [2024-11-04 02:38:56.799297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:11.457 [2024-11-04 02:38:56.799314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8364.700 ms 00:28:11.457 [2024-11-04 02:38:56.799321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.457 [2024-11-04 02:38:56.800423] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:28:11.457 [2024-11-04 02:38:56.800443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:11.457 [2024-11-04 02:38:56.800450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.087 ms 00:28:11.457 [2024-11-04 02:38:56.800456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.457 [2024-11-04 02:38:56.801319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.457 [2024-11-04 02:38:56.801339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:11.457 [2024-11-04 02:38:56.801346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.842 ms 00:28:11.457 [2024-11-04 02:38:56.801352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.457 [2024-11-04 02:38:56.808678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.457 [2024-11-04 02:38:56.808706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:11.457 [2024-11-04 02:38:56.808713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.199 ms 00:28:11.457 [2024-11-04 02:38:56.808719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.457 [2024-11-04 02:38:56.813671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.457 [2024-11-04 02:38:56.813698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:11.457 [2024-11-04 02:38:56.813707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.926 ms 00:28:11.457 [2024-11-04 02:38:56.813714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.457 [2024-11-04 02:38:56.813779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.457 [2024-11-04 02:38:56.813787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:11.457 [2024-11-04 02:38:56.813795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:28:11.457 [2024-11-04 02:38:56.813801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.457 [2024-11-04 02:38:56.821083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.457 [2024-11-04 02:38:56.821108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:11.457 [2024-11-04 02:38:56.821116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.267 ms 00:28:11.457 [2024-11-04 02:38:56.821121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.457 [2024-11-04 02:38:56.828156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.457 [2024-11-04 02:38:56.828264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:11.457 [2024-11-04 02:38:56.828275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.010 ms 00:28:11.457 [2024-11-04 02:38:56.828280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.457 [2024-11-04 02:38:56.835118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.457 [2024-11-04 02:38:56.835212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:11.457 [2024-11-04 02:38:56.835223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.814 ms 00:28:11.457 [2024-11-04 02:38:56.835228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.457 [2024-11-04 02:38:56.842149] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.457 [2024-11-04 02:38:56.842238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:11.457 [2024-11-04 02:38:56.842249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.872 ms 00:28:11.457 [2024-11-04 02:38:56.842254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.457 [2024-11-04 02:38:56.842276] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:11.457 [2024-11-04 02:38:56.842286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:11.457 [2024-11-04 02:38:56.842294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:11.457 [2024-11-04 02:38:56.842307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:11.457 [2024-11-04 02:38:56.842313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:11.457 [2024-11-04 02:38:56.842319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:11.457 [2024-11-04 02:38:56.842325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:11.457 [2024-11-04 02:38:56.842331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:11.457 [2024-11-04 02:38:56.842337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:11.457 [2024-11-04 02:38:56.842342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:11.457 [2024-11-04 02:38:56.842348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:11.457 [2024-11-04 02:38:56.842354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:11.457 [2024-11-04 02:38:56.842360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:11.457 [2024-11-04 02:38:56.842365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:11.457 [2024-11-04 02:38:56.842371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:11.457 [2024-11-04 02:38:56.842376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:11.457 [2024-11-04 02:38:56.842382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:11.457 [2024-11-04 02:38:56.842387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:11.457 [2024-11-04 02:38:56.842393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:11.457 [2024-11-04 02:38:56.842400] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:11.457 [2024-11-04 02:38:56.842405] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: c4d73512-48c8-43f4-a384-6e9f2bccccb1 00:28:11.458 [2024-11-04 02:38:56.842411] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:11.458 [2024-11-04 02:38:56.842417] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:28:11.458 [2024-11-04 02:38:56.842422] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:28:11.458 [2024-11-04 02:38:56.842428] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:28:11.458 [2024-11-04 02:38:56.842433] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:11.458 [2024-11-04 02:38:56.842439] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:11.458 [2024-11-04 02:38:56.842445] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:11.458 [2024-11-04 02:38:56.842450] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:11.458 [2024-11-04 02:38:56.842459] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:11.458 [2024-11-04 02:38:56.842465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.458 [2024-11-04 02:38:56.842474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:11.458 [2024-11-04 02:38:56.842482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.190 ms 00:28:11.458 [2024-11-04 02:38:56.842488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.458 [2024-11-04 02:38:56.851998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.458 [2024-11-04 02:38:56.852022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:11.458 [2024-11-04 02:38:56.852030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.490 ms 00:28:11.458 [2024-11-04 02:38:56.852036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.458 [2024-11-04 02:38:56.852305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.458 [2024-11-04 02:38:56.852316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:11.458 [2024-11-04 02:38:56.852323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.255 ms 00:28:11.458 [2024-11-04 02:38:56.852329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.458 [2024-11-04 02:38:56.885123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:11.458 [2024-11-04 02:38:56.885223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:11.458 [2024-11-04 02:38:56.885235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:11.458 [2024-11-04 02:38:56.885241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.458 [2024-11-04 02:38:56.885268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:11.458 [2024-11-04 02:38:56.885275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:11.458 [2024-11-04 02:38:56.885280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:11.458 [2024-11-04 02:38:56.885287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.458 [2024-11-04 02:38:56.885341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:11.458 [2024-11-04 02:38:56.885349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:11.458 [2024-11-04 02:38:56.885355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:11.458 [2024-11-04 02:38:56.885361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.458 [2024-11-04 02:38:56.885374] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:11.458 [2024-11-04 02:38:56.885383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:11.458 [2024-11-04 02:38:56.885389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:11.458 [2024-11-04 02:38:56.885395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.458 [2024-11-04 02:38:56.944609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:11.458 [2024-11-04 02:38:56.944722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:11.458 [2024-11-04 02:38:56.944734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:11.458 [2024-11-04 02:38:56.944740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.458 [2024-11-04 02:38:56.993443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:11.458 [2024-11-04 02:38:56.993475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:11.458 [2024-11-04 02:38:56.993483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:11.458 [2024-11-04 02:38:56.993489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.458 [2024-11-04 02:38:56.993553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:11.458 [2024-11-04 02:38:56.993561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:11.458 [2024-11-04 02:38:56.993567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:11.458 [2024-11-04 02:38:56.993573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.458 [2024-11-04 02:38:56.993605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:11.458 [2024-11-04 02:38:56.993612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:11.458 [2024-11-04 02:38:56.993622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:11.458 [2024-11-04 02:38:56.993628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.458 [2024-11-04 02:38:56.993697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:11.458 [2024-11-04 02:38:56.993705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:11.458 [2024-11-04 02:38:56.993711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:11.458 [2024-11-04 02:38:56.993717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.458 [2024-11-04 02:38:56.993739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:11.458 [2024-11-04 02:38:56.993746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:11.458 [2024-11-04 02:38:56.993752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:11.458 [2024-11-04 02:38:56.993760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.458 [2024-11-04 02:38:56.993788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:11.458 [2024-11-04 02:38:56.993795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:11.458 [2024-11-04 02:38:56.993801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:11.458 [2024-11-04 02:38:56.993807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.458 
[2024-11-04 02:38:56.993841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:11.458 [2024-11-04 02:38:56.993847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:11.458 [2024-11-04 02:38:56.993856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:11.458 [2024-11-04 02:38:56.993862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.458 [2024-11-04 02:38:56.993968] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8561.747 ms, result 0 00:28:18.038 02:39:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:18.038 02:39:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:28:18.038 02:39:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:18.038 02:39:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:18.038 02:39:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:18.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:18.038 02:39:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80806 00:28:18.038 02:39:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:18.038 02:39:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80806 00:28:18.038 02:39:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 80806 ']' 00:28:18.038 02:39:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:18.038 02:39:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:18.038 02:39:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:18.038 02:39:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:18.038 02:39:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:18.038 02:39:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:18.038 [2024-11-04 02:39:04.020572] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
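The shutdown dump above rewards a quick sanity check. Assuming FTL's usual 4 KiB logical block, the 524288 user writes are exactly the two fill passes (2 GiB / 4 KiB = 524288 blocks), and the reported write amplification follows directly:

  WAF = total writes / user writes = 786752 / 524288 ~ 1.5006

with the surplus ~262k blocks presumably the L2P, metadata and relocation traffic persisted while preparing the upgrade. The band validity dump agrees with that geometry: two full bands of 261120 blocks plus a 2048-block remainder sum to the same 524288 blocks of user data. Note also that almost the entire 8561 ms shutdown sits under 'Stop core poller' (8364.700 ms), which appears to be where that background prep work drains. The target then comes back up from the saved tgt.json, on core 0 this time; the 'unable to find bdev ... cachen1' notices that follow look like the config replay probing for the cache bdev before it has registered, after which FTL startup proceeds with cachen1p0 as the write buffer cache.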
00:28:18.038 [2024-11-04 02:39:04.021392] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80806 ] 00:28:18.038 [2024-11-04 02:39:04.186431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.038 [2024-11-04 02:39:04.319813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.038 [2024-11-04 02:39:05.109969] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:18.038 [2024-11-04 02:39:05.113217] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:18.300 [2024-11-04 02:39:05.263649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.300 [2024-11-04 02:39:05.263884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:18.300 [2024-11-04 02:39:05.263911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:18.300 [2024-11-04 02:39:05.263922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.300 [2024-11-04 02:39:05.264000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.300 [2024-11-04 02:39:05.264012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:18.300 [2024-11-04 02:39:05.264021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:28:18.300 [2024-11-04 02:39:05.264029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.300 [2024-11-04 02:39:05.264059] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:18.300 [2024-11-04 02:39:05.264812] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:18.300 [2024-11-04 02:39:05.264843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.300 [2024-11-04 02:39:05.264853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:18.300 [2024-11-04 02:39:05.264883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.794 ms 00:28:18.300 [2024-11-04 02:39:05.264892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.300 [2024-11-04 02:39:05.266660] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:18.300 [2024-11-04 02:39:05.281343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.300 [2024-11-04 02:39:05.281395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:18.300 [2024-11-04 02:39:05.281409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.685 ms 00:28:18.300 [2024-11-04 02:39:05.281425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.300 [2024-11-04 02:39:05.281505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.300 [2024-11-04 02:39:05.281516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:18.300 [2024-11-04 02:39:05.281526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:28:18.300 [2024-11-04 02:39:05.281533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.300 [2024-11-04 02:39:05.289949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.300 [2024-11-04 
02:39:05.290162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:18.300 [2024-11-04 02:39:05.290180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.325 ms 00:28:18.300 [2024-11-04 02:39:05.290190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.300 [2024-11-04 02:39:05.290265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.300 [2024-11-04 02:39:05.290275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:18.300 [2024-11-04 02:39:05.290285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:28:18.300 [2024-11-04 02:39:05.290293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.300 [2024-11-04 02:39:05.290343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.300 [2024-11-04 02:39:05.290354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:18.300 [2024-11-04 02:39:05.290364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:28:18.300 [2024-11-04 02:39:05.290376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.300 [2024-11-04 02:39:05.290402] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:18.300 [2024-11-04 02:39:05.294539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.300 [2024-11-04 02:39:05.294582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:18.300 [2024-11-04 02:39:05.294593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.143 ms 00:28:18.300 [2024-11-04 02:39:05.294601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.300 [2024-11-04 02:39:05.294636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.300 [2024-11-04 02:39:05.294646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:18.300 [2024-11-04 02:39:05.294656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:18.300 [2024-11-04 02:39:05.294663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.300 [2024-11-04 02:39:05.294724] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:18.300 [2024-11-04 02:39:05.294748] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:18.300 [2024-11-04 02:39:05.294788] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:18.300 [2024-11-04 02:39:05.294805] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:18.300 [2024-11-04 02:39:05.294935] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:18.300 [2024-11-04 02:39:05.294949] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:18.300 [2024-11-04 02:39:05.294960] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:18.300 [2024-11-04 02:39:05.294973] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:18.300 [2024-11-04 02:39:05.294984] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:28:18.300 [2024-11-04 02:39:05.294993] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:18.300 [2024-11-04 02:39:05.295004] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:18.300 [2024-11-04 02:39:05.295012] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:18.300 [2024-11-04 02:39:05.295021] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:18.300 [2024-11-04 02:39:05.295030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.300 [2024-11-04 02:39:05.295039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:18.300 [2024-11-04 02:39:05.295050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.309 ms 00:28:18.300 [2024-11-04 02:39:05.295058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.300 [2024-11-04 02:39:05.295143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.300 [2024-11-04 02:39:05.295153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:18.300 [2024-11-04 02:39:05.295161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:28:18.300 [2024-11-04 02:39:05.295174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.300 [2024-11-04 02:39:05.295284] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:18.300 [2024-11-04 02:39:05.295297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:18.300 [2024-11-04 02:39:05.295307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:18.300 [2024-11-04 02:39:05.295315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:18.300 [2024-11-04 02:39:05.295323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:18.300 [2024-11-04 02:39:05.295330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:18.300 [2024-11-04 02:39:05.295339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:18.300 [2024-11-04 02:39:05.295348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:18.300 [2024-11-04 02:39:05.295358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:18.300 [2024-11-04 02:39:05.295365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:18.300 [2024-11-04 02:39:05.295373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:18.300 [2024-11-04 02:39:05.295381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:18.300 [2024-11-04 02:39:05.295396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:18.300 [2024-11-04 02:39:05.295405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:18.300 [2024-11-04 02:39:05.295413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:18.300 [2024-11-04 02:39:05.295420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:18.300 [2024-11-04 02:39:05.295427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:18.300 [2024-11-04 02:39:05.295434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:18.300 [2024-11-04 02:39:05.295443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:18.300 [2024-11-04 02:39:05.295452] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:18.300 [2024-11-04 02:39:05.295459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:18.300 [2024-11-04 02:39:05.295465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:18.300 [2024-11-04 02:39:05.295472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:18.300 [2024-11-04 02:39:05.295479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:18.300 [2024-11-04 02:39:05.295486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:18.300 [2024-11-04 02:39:05.295499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:18.300 [2024-11-04 02:39:05.295506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:18.300 [2024-11-04 02:39:05.295516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:18.300 [2024-11-04 02:39:05.295524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:18.300 [2024-11-04 02:39:05.295530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:18.300 [2024-11-04 02:39:05.295536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:18.300 [2024-11-04 02:39:05.295543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:18.301 [2024-11-04 02:39:05.295550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:18.301 [2024-11-04 02:39:05.295556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:18.301 [2024-11-04 02:39:05.295562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:18.301 [2024-11-04 02:39:05.295569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:18.301 [2024-11-04 02:39:05.295575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:18.301 [2024-11-04 02:39:05.295583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:18.301 [2024-11-04 02:39:05.295592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:18.301 [2024-11-04 02:39:05.295598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:18.301 [2024-11-04 02:39:05.295604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:18.301 [2024-11-04 02:39:05.295611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:18.301 [2024-11-04 02:39:05.295632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:18.301 [2024-11-04 02:39:05.295639] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:18.301 [2024-11-04 02:39:05.295651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:18.301 [2024-11-04 02:39:05.295663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:18.301 [2024-11-04 02:39:05.295673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:18.301 [2024-11-04 02:39:05.295682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:18.301 [2024-11-04 02:39:05.295690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:18.301 [2024-11-04 02:39:05.295697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:18.301 [2024-11-04 02:39:05.295704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:18.301 [2024-11-04 02:39:05.295710] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:18.301 [2024-11-04 02:39:05.295717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:18.301 [2024-11-04 02:39:05.295728] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:18.301 [2024-11-04 02:39:05.295740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:18.301 [2024-11-04 02:39:05.295750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:18.301 [2024-11-04 02:39:05.295757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:18.301 [2024-11-04 02:39:05.295765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:18.301 [2024-11-04 02:39:05.295772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:18.301 [2024-11-04 02:39:05.295781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:18.301 [2024-11-04 02:39:05.295789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:18.301 [2024-11-04 02:39:05.295797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:18.301 [2024-11-04 02:39:05.295805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:18.301 [2024-11-04 02:39:05.295812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:18.301 [2024-11-04 02:39:05.295820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:18.301 [2024-11-04 02:39:05.295827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:18.301 [2024-11-04 02:39:05.295836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:18.301 [2024-11-04 02:39:05.295845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:18.301 [2024-11-04 02:39:05.295853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:18.301 [2024-11-04 02:39:05.295860] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:18.301 [2024-11-04 02:39:05.295884] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:18.301 [2024-11-04 02:39:05.295893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:18.301 [2024-11-04 02:39:05.295901] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:18.301 [2024-11-04 02:39:05.295908] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:18.301 [2024-11-04 02:39:05.295918] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:18.301 [2024-11-04 02:39:05.295926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:18.301 [2024-11-04 02:39:05.295935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:18.301 [2024-11-04 02:39:05.295946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.711 ms 00:28:18.301 [2024-11-04 02:39:05.295954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:18.301 [2024-11-04 02:39:05.296002] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:28:18.301 [2024-11-04 02:39:05.296022] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:22.543 [2024-11-04 02:39:09.609282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.543 [2024-11-04 02:39:09.609365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:22.543 [2024-11-04 02:39:09.609383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4313.262 ms 00:28:22.543 [2024-11-04 02:39:09.609393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.543 [2024-11-04 02:39:09.641026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.543 [2024-11-04 02:39:09.641086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:22.543 [2024-11-04 02:39:09.641102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.384 ms 00:28:22.543 [2024-11-04 02:39:09.641112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.543 [2024-11-04 02:39:09.641208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.543 [2024-11-04 02:39:09.641219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:22.543 [2024-11-04 02:39:09.641235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:28:22.543 [2024-11-04 02:39:09.641244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.804 [2024-11-04 02:39:09.676491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.804 [2024-11-04 02:39:09.676542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:22.804 [2024-11-04 02:39:09.676554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.205 ms 00:28:22.804 [2024-11-04 02:39:09.676563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.804 [2024-11-04 02:39:09.676600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.804 [2024-11-04 02:39:09.676609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:22.804 [2024-11-04 02:39:09.676619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:22.804 [2024-11-04 02:39:09.676628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.804 [2024-11-04 02:39:09.677223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.804 [2024-11-04 02:39:09.677249] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:22.804 [2024-11-04 02:39:09.677261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.519 ms 00:28:22.804 [2024-11-04 02:39:09.677272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.804 [2024-11-04 02:39:09.677331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.804 [2024-11-04 02:39:09.677345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:22.804 [2024-11-04 02:39:09.677354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:28:22.804 [2024-11-04 02:39:09.677363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.804 [2024-11-04 02:39:09.694986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.804 [2024-11-04 02:39:09.695271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:22.804 [2024-11-04 02:39:09.695291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.600 ms 00:28:22.804 [2024-11-04 02:39:09.695300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.804 [2024-11-04 02:39:09.709361] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:22.804 [2024-11-04 02:39:09.709411] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:22.804 [2024-11-04 02:39:09.709426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.804 [2024-11-04 02:39:09.709436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:28:22.804 [2024-11-04 02:39:09.709446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.997 ms 00:28:22.804 [2024-11-04 02:39:09.709454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.804 [2024-11-04 02:39:09.724236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.804 [2024-11-04 02:39:09.724283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:28:22.804 [2024-11-04 02:39:09.724296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.727 ms 00:28:22.804 [2024-11-04 02:39:09.724305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.804 [2024-11-04 02:39:09.736580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.804 [2024-11-04 02:39:09.736624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:28:22.804 [2024-11-04 02:39:09.736636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.219 ms 00:28:22.804 [2024-11-04 02:39:09.736644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.804 [2024-11-04 02:39:09.749116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.804 [2024-11-04 02:39:09.749158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:28:22.804 [2024-11-04 02:39:09.749170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.422 ms 00:28:22.804 [2024-11-04 02:39:09.749179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.804 [2024-11-04 02:39:09.749829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.804 [2024-11-04 02:39:09.749848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:22.804 [2024-11-04 
02:39:09.749862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.532 ms 00:28:22.804 [2024-11-04 02:39:09.749911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.804 [2024-11-04 02:39:09.823861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.804 [2024-11-04 02:39:09.823948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:22.804 [2024-11-04 02:39:09.823965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 73.925 ms 00:28:22.804 [2024-11-04 02:39:09.823975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.804 [2024-11-04 02:39:09.835168] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:22.804 [2024-11-04 02:39:09.836173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.804 [2024-11-04 02:39:09.836218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:22.804 [2024-11-04 02:39:09.836231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.136 ms 00:28:22.804 [2024-11-04 02:39:09.836240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.804 [2024-11-04 02:39:09.836328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.804 [2024-11-04 02:39:09.836340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:28:22.804 [2024-11-04 02:39:09.836354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:28:22.804 [2024-11-04 02:39:09.836363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.804 [2024-11-04 02:39:09.836427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.804 [2024-11-04 02:39:09.836439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:22.804 [2024-11-04 02:39:09.836448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:28:22.804 [2024-11-04 02:39:09.836457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.804 [2024-11-04 02:39:09.836482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.804 [2024-11-04 02:39:09.836491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:22.804 [2024-11-04 02:39:09.836501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:22.804 [2024-11-04 02:39:09.836514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.804 [2024-11-04 02:39:09.836552] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:22.804 [2024-11-04 02:39:09.836563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.804 [2024-11-04 02:39:09.836575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:22.804 [2024-11-04 02:39:09.836585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:28:22.804 [2024-11-04 02:39:09.836593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.804 [2024-11-04 02:39:09.861792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.804 [2024-11-04 02:39:09.861841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:22.804 [2024-11-04 02:39:09.861862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.178 ms 00:28:22.804 [2024-11-04 02:39:09.861894] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.805 [2024-11-04 02:39:09.861986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.805 [2024-11-04 02:39:09.861999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:22.805 [2024-11-04 02:39:09.862009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:28:22.805 [2024-11-04 02:39:09.862018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.805 [2024-11-04 02:39:09.863284] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4599.111 ms, result 0 00:28:22.805 [2024-11-04 02:39:09.878247] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:22.805 [2024-11-04 02:39:09.894249] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:22.805 [2024-11-04 02:39:09.902424] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:23.066 02:39:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:23.066 02:39:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:28:23.066 02:39:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:23.067 02:39:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:23.067 02:39:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:23.328 [2024-11-04 02:39:10.262621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.328 [2024-11-04 02:39:10.262671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:23.329 [2024-11-04 02:39:10.262685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:23.329 [2024-11-04 02:39:10.262694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.329 [2024-11-04 02:39:10.262721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.329 [2024-11-04 02:39:10.262730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:23.329 [2024-11-04 02:39:10.262740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:23.329 [2024-11-04 02:39:10.262749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.329 [2024-11-04 02:39:10.262784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.329 [2024-11-04 02:39:10.262794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:23.329 [2024-11-04 02:39:10.262803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:23.329 [2024-11-04 02:39:10.262811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.329 [2024-11-04 02:39:10.262896] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.238 ms, result 0 00:28:23.329 true 00:28:23.329 02:39:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:23.590 { 00:28:23.590 "name": "ftl", 00:28:23.590 "properties": [ 00:28:23.590 { 00:28:23.590 "name": "superblock_version", 00:28:23.590 "value": 5, 00:28:23.590 "read-only": true 00:28:23.590 }, 
00:28:23.590 { 00:28:23.590 "name": "base_device", 00:28:23.590 "bands": [ 00:28:23.590 { 00:28:23.590 "id": 0, 00:28:23.590 "state": "CLOSED", 00:28:23.590 "validity": 1.0 00:28:23.590 }, 00:28:23.590 { 00:28:23.590 "id": 1, 00:28:23.590 "state": "CLOSED", 00:28:23.590 "validity": 1.0 00:28:23.590 }, 00:28:23.590 { 00:28:23.590 "id": 2, 00:28:23.590 "state": "CLOSED", 00:28:23.590 "validity": 0.007843137254901933 00:28:23.590 }, 00:28:23.590 { 00:28:23.590 "id": 3, 00:28:23.590 "state": "FREE", 00:28:23.590 "validity": 0.0 00:28:23.590 }, 00:28:23.590 { 00:28:23.590 "id": 4, 00:28:23.590 "state": "FREE", 00:28:23.590 "validity": 0.0 00:28:23.590 }, 00:28:23.590 { 00:28:23.590 "id": 5, 00:28:23.590 "state": "FREE", 00:28:23.590 "validity": 0.0 00:28:23.590 }, 00:28:23.590 { 00:28:23.590 "id": 6, 00:28:23.590 "state": "FREE", 00:28:23.590 "validity": 0.0 00:28:23.590 }, 00:28:23.590 { 00:28:23.590 "id": 7, 00:28:23.590 "state": "FREE", 00:28:23.590 "validity": 0.0 00:28:23.590 }, 00:28:23.590 { 00:28:23.590 "id": 8, 00:28:23.590 "state": "FREE", 00:28:23.590 "validity": 0.0 00:28:23.590 }, 00:28:23.590 { 00:28:23.590 "id": 9, 00:28:23.590 "state": "FREE", 00:28:23.590 "validity": 0.0 00:28:23.590 }, 00:28:23.590 { 00:28:23.590 "id": 10, 00:28:23.590 "state": "FREE", 00:28:23.590 "validity": 0.0 00:28:23.590 }, 00:28:23.590 { 00:28:23.590 "id": 11, 00:28:23.590 "state": "FREE", 00:28:23.590 "validity": 0.0 00:28:23.590 }, 00:28:23.590 { 00:28:23.590 "id": 12, 00:28:23.590 "state": "FREE", 00:28:23.590 "validity": 0.0 00:28:23.590 }, 00:28:23.590 { 00:28:23.590 "id": 13, 00:28:23.590 "state": "FREE", 00:28:23.590 "validity": 0.0 00:28:23.590 }, 00:28:23.590 { 00:28:23.590 "id": 14, 00:28:23.590 "state": "FREE", 00:28:23.590 "validity": 0.0 00:28:23.590 }, 00:28:23.590 { 00:28:23.590 "id": 15, 00:28:23.590 "state": "FREE", 00:28:23.590 "validity": 0.0 00:28:23.590 }, 00:28:23.590 { 00:28:23.590 "id": 16, 00:28:23.590 "state": "FREE", 00:28:23.590 "validity": 0.0 00:28:23.590 }, 00:28:23.590 { 00:28:23.590 "id": 17, 00:28:23.590 "state": "FREE", 00:28:23.590 "validity": 0.0 00:28:23.590 } 00:28:23.590 ], 00:28:23.590 "read-only": true 00:28:23.590 }, 00:28:23.590 { 00:28:23.590 "name": "cache_device", 00:28:23.590 "type": "bdev", 00:28:23.590 "chunks": [ 00:28:23.590 { 00:28:23.590 "id": 0, 00:28:23.590 "state": "INACTIVE", 00:28:23.590 "utilization": 0.0 00:28:23.590 }, 00:28:23.590 { 00:28:23.590 "id": 1, 00:28:23.590 "state": "OPEN", 00:28:23.590 "utilization": 0.0 00:28:23.590 }, 00:28:23.590 { 00:28:23.590 "id": 2, 00:28:23.590 "state": "OPEN", 00:28:23.590 "utilization": 0.0 00:28:23.590 }, 00:28:23.590 { 00:28:23.590 "id": 3, 00:28:23.590 "state": "FREE", 00:28:23.590 "utilization": 0.0 00:28:23.590 }, 00:28:23.590 { 00:28:23.590 "id": 4, 00:28:23.590 "state": "FREE", 00:28:23.590 "utilization": 0.0 00:28:23.590 } 00:28:23.590 ], 00:28:23.590 "read-only": true 00:28:23.590 }, 00:28:23.590 { 00:28:23.590 "name": "verbose_mode", 00:28:23.590 "value": true, 00:28:23.590 "unit": "", 00:28:23.590 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:23.590 }, 00:28:23.590 { 00:28:23.590 "name": "prep_upgrade_on_shutdown", 00:28:23.590 "value": false, 00:28:23.590 "unit": "", 00:28:23.590 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:23.590 } 00:28:23.590 ] 00:28:23.590 } 00:28:23.590 02:39:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:28:23.590 02:39:10 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:23.590 02:39:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:23.851 02:39:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:28:23.851 02:39:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:28:23.851 02:39:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:28:23.851 02:39:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:28:23.851 02:39:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:23.851 Validate MD5 checksum, iteration 1 00:28:23.851 02:39:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:28:23.851 02:39:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:28:23.851 02:39:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:28:23.851 02:39:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:23.851 02:39:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:23.851 02:39:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:23.851 02:39:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:23.851 02:39:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:23.851 02:39:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:23.851 02:39:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:23.851 02:39:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:23.851 02:39:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:23.851 02:39:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:24.110 [2024-11-04 02:39:11.013134] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:28:24.110 [2024-11-04 02:39:11.013737] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80900 ] 00:28:24.110 [2024-11-04 02:39:11.184448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.371 [2024-11-04 02:39:11.308331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:25.755  [2024-11-04T02:39:14.251Z] Copying: 500/1024 [MB] (500 MBps) [2024-11-04T02:39:14.251Z] Copying: 960/1024 [MB] (460 MBps) [2024-11-04T02:39:15.634Z] Copying: 1024/1024 [MB] (average 480 MBps) 00:28:28.523 00:28:28.523 02:39:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:28.523 02:39:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:30.432 02:39:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:30.432 Validate MD5 checksum, iteration 2 00:28:30.432 02:39:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b37e3e856fc378715c313f950aee231d 00:28:30.432 02:39:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b37e3e856fc378715c313f950aee231d != \b\3\7\e\3\e\8\5\6\f\c\3\7\8\7\1\5\c\3\1\3\f\9\5\0\a\e\e\2\3\1\d ]] 00:28:30.432 02:39:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:30.432 02:39:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:30.432 02:39:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:30.432 02:39:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:30.432 02:39:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:30.432 02:39:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:30.432 02:39:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:30.432 02:39:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:30.432 02:39:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:30.693 [2024-11-04 02:39:17.577174] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
00:28:30.693 [2024-11-04 02:39:17.577286] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80973 ] 00:28:30.693 [2024-11-04 02:39:17.736554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.954 [2024-11-04 02:39:17.831354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:32.342  [2024-11-04T02:39:20.392Z] Copying: 558/1024 [MB] (558 MBps) [2024-11-04T02:39:20.959Z] Copying: 1024/1024 [MB] (average 538 MBps) 00:28:33.848 00:28:33.848 02:39:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:33.848 02:39:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:36.396 02:39:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:36.396 02:39:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=1b5bb72ef51c0e571b8b5f444024af00 00:28:36.396 02:39:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 1b5bb72ef51c0e571b8b5f444024af00 != \1\b\5\b\b\7\2\e\f\5\1\c\0\e\5\7\1\b\8\b\5\f\4\4\4\0\2\4\a\f\0\0 ]] 00:28:36.396 02:39:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:36.396 02:39:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:36.396 02:39:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:28:36.396 02:39:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 80806 ]] 00:28:36.396 02:39:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 80806 00:28:36.396 02:39:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:28:36.396 02:39:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:28:36.396 02:39:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:36.396 02:39:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:36.396 02:39:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:36.396 02:39:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81034 00:28:36.396 02:39:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:36.396 02:39:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:36.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:36.396 02:39:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81034 00:28:36.396 02:39:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 81034 ']' 00:28:36.396 02:39:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:36.396 02:39:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:36.396 02:39:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:36.396 02:39:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:36.396 02:39:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:36.396 [2024-11-04 02:39:23.121538] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:28:36.396 [2024-11-04 02:39:23.121821] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81034 ] 00:28:36.396 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: 80806 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:28:36.396 [2024-11-04 02:39:23.278261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.396 [2024-11-04 02:39:23.370811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:36.968 [2024-11-04 02:39:23.993704] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:36.968 [2024-11-04 02:39:23.993941] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:37.230 [2024-11-04 02:39:24.139819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.230 [2024-11-04 02:39:24.140026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:37.230 [2024-11-04 02:39:24.140103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:37.230 [2024-11-04 02:39:24.140129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.230 [2024-11-04 02:39:24.140202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.230 [2024-11-04 02:39:24.140229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:37.230 [2024-11-04 02:39:24.140250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:28:37.230 [2024-11-04 02:39:24.140318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.230 [2024-11-04 02:39:24.140366] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:37.230 [2024-11-04 02:39:24.141096] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:37.230 [2024-11-04 02:39:24.141403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.230 [2024-11-04 02:39:24.141422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:37.230 [2024-11-04 02:39:24.141434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.045 ms 00:28:37.230 [2024-11-04 02:39:24.141442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.230 [2024-11-04 02:39:24.141803] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:37.230 [2024-11-04 02:39:24.158368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.230 [2024-11-04 02:39:24.158413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:37.230 [2024-11-04 02:39:24.158425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.566 ms 00:28:37.230 [2024-11-04 02:39:24.158433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.230 [2024-11-04 02:39:24.167490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:28:37.230 [2024-11-04 02:39:24.167521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:37.230 [2024-11-04 02:39:24.167534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:28:37.230 [2024-11-04 02:39:24.167542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.230 [2024-11-04 02:39:24.167860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.230 [2024-11-04 02:39:24.167893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:37.230 [2024-11-04 02:39:24.167902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.234 ms 00:28:37.230 [2024-11-04 02:39:24.167910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.230 [2024-11-04 02:39:24.167958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.230 [2024-11-04 02:39:24.167970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:37.230 [2024-11-04 02:39:24.167978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:28:37.230 [2024-11-04 02:39:24.167986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.230 [2024-11-04 02:39:24.168010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.230 [2024-11-04 02:39:24.168018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:37.230 [2024-11-04 02:39:24.168026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:37.230 [2024-11-04 02:39:24.168033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.230 [2024-11-04 02:39:24.168053] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:37.230 [2024-11-04 02:39:24.171039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.230 [2024-11-04 02:39:24.171066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:37.230 [2024-11-04 02:39:24.171076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.991 ms 00:28:37.230 [2024-11-04 02:39:24.171083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.230 [2024-11-04 02:39:24.171110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.230 [2024-11-04 02:39:24.171121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:37.231 [2024-11-04 02:39:24.171129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:37.231 [2024-11-04 02:39:24.171136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.231 [2024-11-04 02:39:24.171155] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:37.231 [2024-11-04 02:39:24.171173] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:37.231 [2024-11-04 02:39:24.171206] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:37.231 [2024-11-04 02:39:24.171221] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:37.231 [2024-11-04 02:39:24.171324] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:37.231 [2024-11-04 02:39:24.171335] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:37.231 [2024-11-04 02:39:24.171346] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:37.231 [2024-11-04 02:39:24.171355] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:37.231 [2024-11-04 02:39:24.171364] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:37.231 [2024-11-04 02:39:24.171372] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:37.231 [2024-11-04 02:39:24.171379] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:37.231 [2024-11-04 02:39:24.171387] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:37.231 [2024-11-04 02:39:24.171394] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:37.231 [2024-11-04 02:39:24.171401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.231 [2024-11-04 02:39:24.171409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:37.231 [2024-11-04 02:39:24.171419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.248 ms 00:28:37.231 [2024-11-04 02:39:24.171426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.231 [2024-11-04 02:39:24.171509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.231 [2024-11-04 02:39:24.171517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:37.231 [2024-11-04 02:39:24.171525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:28:37.231 [2024-11-04 02:39:24.171532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.231 [2024-11-04 02:39:24.171655] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:37.231 [2024-11-04 02:39:24.171666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:37.231 [2024-11-04 02:39:24.171675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:37.231 [2024-11-04 02:39:24.171685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:37.231 [2024-11-04 02:39:24.171692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:37.231 [2024-11-04 02:39:24.171699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:37.231 [2024-11-04 02:39:24.171707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:37.231 [2024-11-04 02:39:24.171714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:37.231 [2024-11-04 02:39:24.171721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:37.231 [2024-11-04 02:39:24.171728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:37.231 [2024-11-04 02:39:24.171735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:37.231 [2024-11-04 02:39:24.171742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:37.231 [2024-11-04 02:39:24.171749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:37.231 [2024-11-04 02:39:24.171757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:37.231 [2024-11-04 02:39:24.171765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:28:37.231 [2024-11-04 02:39:24.171771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:37.231 [2024-11-04 02:39:24.171778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:37.231 [2024-11-04 02:39:24.171784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:37.231 [2024-11-04 02:39:24.171790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:37.231 [2024-11-04 02:39:24.171798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:37.231 [2024-11-04 02:39:24.171804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:37.231 [2024-11-04 02:39:24.171810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:37.231 [2024-11-04 02:39:24.171817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:37.231 [2024-11-04 02:39:24.171828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:37.231 [2024-11-04 02:39:24.171834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:37.231 [2024-11-04 02:39:24.171841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:37.231 [2024-11-04 02:39:24.171847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:37.231 [2024-11-04 02:39:24.171853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:37.231 [2024-11-04 02:39:24.171860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:37.231 [2024-11-04 02:39:24.171885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:37.231 [2024-11-04 02:39:24.171892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:37.231 [2024-11-04 02:39:24.171898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:37.231 [2024-11-04 02:39:24.171905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:37.231 [2024-11-04 02:39:24.171913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:37.231 [2024-11-04 02:39:24.171920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:37.231 [2024-11-04 02:39:24.171926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:37.231 [2024-11-04 02:39:24.171933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:37.231 [2024-11-04 02:39:24.171939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:37.231 [2024-11-04 02:39:24.171945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:37.231 [2024-11-04 02:39:24.171952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:37.231 [2024-11-04 02:39:24.171958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:37.231 [2024-11-04 02:39:24.171965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:37.231 [2024-11-04 02:39:24.171972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:37.231 [2024-11-04 02:39:24.171978] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:37.231 [2024-11-04 02:39:24.171986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:37.231 [2024-11-04 02:39:24.171993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:37.231 [2024-11-04 02:39:24.172003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:28:37.231 [2024-11-04 02:39:24.172012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:37.231 [2024-11-04 02:39:24.172019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:37.231 [2024-11-04 02:39:24.172026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:37.231 [2024-11-04 02:39:24.172033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:37.231 [2024-11-04 02:39:24.172039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:37.231 [2024-11-04 02:39:24.172046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:37.231 [2024-11-04 02:39:24.172055] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:37.231 [2024-11-04 02:39:24.172064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:37.231 [2024-11-04 02:39:24.172073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:37.231 [2024-11-04 02:39:24.172080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:37.231 [2024-11-04 02:39:24.172087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:37.231 [2024-11-04 02:39:24.172093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:37.231 [2024-11-04 02:39:24.172100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:37.231 [2024-11-04 02:39:24.172107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:37.231 [2024-11-04 02:39:24.172114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:37.231 [2024-11-04 02:39:24.172122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:37.231 [2024-11-04 02:39:24.172128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:37.231 [2024-11-04 02:39:24.172135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:37.231 [2024-11-04 02:39:24.172142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:37.231 [2024-11-04 02:39:24.172149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:37.231 [2024-11-04 02:39:24.172156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:37.231 [2024-11-04 02:39:24.172164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:37.231 [2024-11-04 02:39:24.172170] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:28:37.231 [2024-11-04 02:39:24.172178] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:37.231 [2024-11-04 02:39:24.172186] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:37.231 [2024-11-04 02:39:24.172193] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:37.231 [2024-11-04 02:39:24.172200] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:37.231 [2024-11-04 02:39:24.172208] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:37.232 [2024-11-04 02:39:24.172214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.232 [2024-11-04 02:39:24.172225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:37.232 [2024-11-04 02:39:24.172231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.638 ms 00:28:37.232 [2024-11-04 02:39:24.172242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.232 [2024-11-04 02:39:24.196477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.232 [2024-11-04 02:39:24.196510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:37.232 [2024-11-04 02:39:24.196520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.185 ms 00:28:37.232 [2024-11-04 02:39:24.196528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.232 [2024-11-04 02:39:24.196562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.232 [2024-11-04 02:39:24.196570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:37.232 [2024-11-04 02:39:24.196578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:37.232 [2024-11-04 02:39:24.196585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.232 [2024-11-04 02:39:24.227499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.232 [2024-11-04 02:39:24.227531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:37.232 [2024-11-04 02:39:24.227542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.866 ms 00:28:37.232 [2024-11-04 02:39:24.227550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.232 [2024-11-04 02:39:24.227574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.232 [2024-11-04 02:39:24.227583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:37.232 [2024-11-04 02:39:24.227591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:37.232 [2024-11-04 02:39:24.227598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.232 [2024-11-04 02:39:24.227700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.232 [2024-11-04 02:39:24.227711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:37.232 [2024-11-04 02:39:24.227720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:28:37.232 [2024-11-04 02:39:24.227727] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:37.232 [2024-11-04 02:39:24.227765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.232 [2024-11-04 02:39:24.227773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:37.232 [2024-11-04 02:39:24.227781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:28:37.232 [2024-11-04 02:39:24.227788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.232 [2024-11-04 02:39:24.242060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.232 [2024-11-04 02:39:24.242088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:37.232 [2024-11-04 02:39:24.242098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.251 ms 00:28:37.232 [2024-11-04 02:39:24.242105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.232 [2024-11-04 02:39:24.242211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.232 [2024-11-04 02:39:24.242221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:28:37.232 [2024-11-04 02:39:24.242229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:37.232 [2024-11-04 02:39:24.242237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.232 [2024-11-04 02:39:24.273226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.232 [2024-11-04 02:39:24.273266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:28:37.232 [2024-11-04 02:39:24.273279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.971 ms 00:28:37.232 [2024-11-04 02:39:24.273287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.232 [2024-11-04 02:39:24.282826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.232 [2024-11-04 02:39:24.282857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:37.232 [2024-11-04 02:39:24.282881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.516 ms 00:28:37.232 [2024-11-04 02:39:24.282896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.232 [2024-11-04 02:39:24.337969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.232 [2024-11-04 02:39:24.338014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:37.232 [2024-11-04 02:39:24.338031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 55.018 ms 00:28:37.232 [2024-11-04 02:39:24.338039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.232 [2024-11-04 02:39:24.338161] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:28:37.232 [2024-11-04 02:39:24.338254] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:28:37.232 [2024-11-04 02:39:24.338343] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:28:37.232 [2024-11-04 02:39:24.338433] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:28:37.232 [2024-11-04 02:39:24.338443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.232 [2024-11-04 02:39:24.338451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:28:37.232 [2024-11-04 
02:39:24.338460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.364 ms 00:28:37.232 [2024-11-04 02:39:24.338467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.232 [2024-11-04 02:39:24.338518] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:28:37.232 [2024-11-04 02:39:24.338529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.232 [2024-11-04 02:39:24.338538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:28:37.232 [2024-11-04 02:39:24.338549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:28:37.232 [2024-11-04 02:39:24.338557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.494 [2024-11-04 02:39:24.353296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.494 [2024-11-04 02:39:24.353327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:28:37.494 [2024-11-04 02:39:24.353341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.717 ms 00:28:37.494 [2024-11-04 02:39:24.353349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.494 [2024-11-04 02:39:24.361936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.494 [2024-11-04 02:39:24.361967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:28:37.494 [2024-11-04 02:39:24.361977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:28:37.494 [2024-11-04 02:39:24.361986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.494 [2024-11-04 02:39:24.362069] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:28:37.494 [2024-11-04 02:39:24.362203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.494 [2024-11-04 02:39:24.362216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:37.494 [2024-11-04 02:39:24.362225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.134 ms 00:28:37.494 [2024-11-04 02:39:24.362233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.066 [2024-11-04 02:39:25.049813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.066 [2024-11-04 02:39:25.049914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:38.066 [2024-11-04 02:39:25.049936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 686.723 ms 00:28:38.066 [2024-11-04 02:39:25.049948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.066 [2024-11-04 02:39:25.055051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.066 [2024-11-04 02:39:25.055108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:38.066 [2024-11-04 02:39:25.055121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.587 ms 00:28:38.066 [2024-11-04 02:39:25.055130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.066 [2024-11-04 02:39:25.056366] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:28:38.066 [2024-11-04 02:39:25.056480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.066 [2024-11-04 02:39:25.056492] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:38.066 [2024-11-04 02:39:25.056502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.311 ms 00:28:38.066 [2024-11-04 02:39:25.056511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.066 [2024-11-04 02:39:25.056558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.066 [2024-11-04 02:39:25.056570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:38.066 [2024-11-04 02:39:25.056580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:38.066 [2024-11-04 02:39:25.056589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.066 [2024-11-04 02:39:25.056634] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 694.556 ms, result 0 00:28:38.066 [2024-11-04 02:39:25.056682] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:28:38.066 [2024-11-04 02:39:25.057020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.066 [2024-11-04 02:39:25.057040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:38.066 [2024-11-04 02:39:25.057051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.339 ms 00:28:38.066 [2024-11-04 02:39:25.057059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.638 [2024-11-04 02:39:25.732638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.638 [2024-11-04 02:39:25.732701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:38.638 [2024-11-04 02:39:25.732717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 674.242 ms 00:28:38.638 [2024-11-04 02:39:25.732726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.638 [2024-11-04 02:39:25.737676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.638 [2024-11-04 02:39:25.737729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:38.638 [2024-11-04 02:39:25.737741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.819 ms 00:28:38.638 [2024-11-04 02:39:25.737750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.638 [2024-11-04 02:39:25.738838] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:28:38.638 [2024-11-04 02:39:25.738906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.638 [2024-11-04 02:39:25.738916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:38.638 [2024-11-04 02:39:25.738927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.135 ms 00:28:38.638 [2024-11-04 02:39:25.738935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.638 [2024-11-04 02:39:25.738978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.638 [2024-11-04 02:39:25.738989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:38.638 [2024-11-04 02:39:25.738999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:38.638 [2024-11-04 02:39:25.739008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.638 [2024-11-04 
02:39:25.739048] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 682.362 ms, result 0 00:28:38.638 [2024-11-04 02:39:25.739096] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:38.638 [2024-11-04 02:39:25.739107] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:38.638 [2024-11-04 02:39:25.739118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.638 [2024-11-04 02:39:25.739127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:28:38.638 [2024-11-04 02:39:25.739135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1377.065 ms 00:28:38.638 [2024-11-04 02:39:25.739143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.638 [2024-11-04 02:39:25.739174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.638 [2024-11-04 02:39:25.739184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:28:38.638 [2024-11-04 02:39:25.739196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:38.638 [2024-11-04 02:39:25.739206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.901 [2024-11-04 02:39:25.753256] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:38.901 [2024-11-04 02:39:25.753553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.901 [2024-11-04 02:39:25.753591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:38.901 [2024-11-04 02:39:25.753674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.329 ms 00:28:38.901 [2024-11-04 02:39:25.753741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.901 [2024-11-04 02:39:25.754529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.901 [2024-11-04 02:39:25.754584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:28:38.901 [2024-11-04 02:39:25.754677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.670 ms 00:28:38.901 [2024-11-04 02:39:25.754708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.901 [2024-11-04 02:39:25.757213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.901 [2024-11-04 02:39:25.757351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:28:38.901 [2024-11-04 02:39:25.757415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.467 ms 00:28:38.901 [2024-11-04 02:39:25.757438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.901 [2024-11-04 02:39:25.757503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.901 [2024-11-04 02:39:25.757524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:28:38.901 [2024-11-04 02:39:25.757547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:38.901 [2024-11-04 02:39:25.757567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.901 [2024-11-04 02:39:25.757765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.901 [2024-11-04 02:39:25.757979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:38.901 
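Every management step in the startup sequence above is reported by trace_step as a name/duration/status triple, so a saved log can be folded back into a per-step time budget; in this run nearly all of the startup total reported just below comes from the two open-chunk recoveries at roughly 690 ms each. A small awk sketch for ranking steps by duration, assuming one trace entry per line as the target emits them (the field handling is an assumption about the log layout, not SPDK code):

    # Rank FTL management steps by duration from a saved console log.
    # Assumes paired lines like:
    #   "... trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints"
    #   "... trace_step: *NOTICE*: [FTL][ftl] duration: 55.018 ms"
    awk '
      /trace_step:.*name:/     { sub(/.*name: /, ""); step = $0 }
      /trace_step:.*duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, "")
                                 printf "%10.3f ms  %s\n", $0, step }
    ' ftl.log | sort -rn | head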
[2024-11-04 02:39:25.758080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:28:38.901 [2024-11-04 02:39:25.758095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.901 [2024-11-04 02:39:25.758135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.901 [2024-11-04 02:39:25.758145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:38.901 [2024-11-04 02:39:25.758154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:38.901 [2024-11-04 02:39:25.758161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.901 [2024-11-04 02:39:25.758223] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:38.901 [2024-11-04 02:39:25.758239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.901 [2024-11-04 02:39:25.758246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:38.901 [2024-11-04 02:39:25.758255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:28:38.901 [2024-11-04 02:39:25.758263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.901 [2024-11-04 02:39:25.758324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.901 [2024-11-04 02:39:25.758333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:38.901 [2024-11-04 02:39:25.758342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:28:38.901 [2024-11-04 02:39:25.758350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.901 [2024-11-04 02:39:25.760014] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1619.542 ms, result 0 00:28:38.901 [2024-11-04 02:39:25.774438] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:38.901 [2024-11-04 02:39:25.790413] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:38.901 [2024-11-04 02:39:25.799747] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:38.901 Validate MD5 checksum, iteration 1 00:28:38.901 02:39:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:38.901 02:39:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:28:38.901 02:39:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:38.901 02:39:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:38.901 02:39:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:28:38.901 02:39:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:38.901 02:39:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:38.901 02:39:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:38.901 02:39:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:38.901 02:39:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:38.901 02:39:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:38.901 02:39:25 
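The reads used for validation go through tcp_dd, whose ftl/common.sh trace follows: it checks that the initiator-side JSON config exists and then drives spdk_dd as an NVMe/TCP initiator pinned to core 1. A condensed sketch of that wrapper, using the common.sh exports (spdk_dd_bin, spdk_ini_rpc, spdk_ini_cnfg) that are visible later in this log; error handling omitted:

    tcp_dd() {
        tcp_initiator_setup                        # validates/creates ini.json, as traced below
        "$spdk_dd_bin" '--cpumask=[1]' --rpc-socket="$spdk_ini_rpc" \
            --json="$spdk_ini_cnfg" "$@"           # e.g. --ib=ftln1 --bs=1048576 --count=1024 --qd=2
    }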
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:38.901 02:39:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:38.901 02:39:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:38.901 02:39:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:38.901 [2024-11-04 02:39:25.908120] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:28:38.901 [2024-11-04 02:39:25.908357] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81069 ] 00:28:39.162 [2024-11-04 02:39:26.067862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.162 [2024-11-04 02:39:26.163223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.609  [2024-11-04T02:39:28.656Z] Copying: 553/1024 [MB] (553 MBps) [2024-11-04T02:39:32.856Z] Copying: 1024/1024 [MB] (average 619 MBps) 00:28:45.745 00:28:45.745 02:39:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:45.745 02:39:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:47.125 02:39:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:47.125 02:39:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b37e3e856fc378715c313f950aee231d 00:28:47.125 02:39:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b37e3e856fc378715c313f950aee231d != \b\3\7\e\3\e\8\5\6\f\c\3\7\8\7\1\5\c\3\1\3\f\9\5\0\a\e\e\2\3\1\d ]] 00:28:47.125 02:39:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:47.125 02:39:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:47.125 02:39:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:47.125 Validate MD5 checksum, iteration 2 00:28:47.125 02:39:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:47.125 02:39:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:47.125 02:39:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:47.125 02:39:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:47.125 02:39:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:47.125 02:39:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:47.384 [2024-11-04 02:39:34.296198] Starting SPDK v25.01-pre git sha1 
fa3ab7384 / DPDK 24.03.0 initialization... 00:28:47.385 [2024-11-04 02:39:34.296451] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81162 ] 00:28:47.385 [2024-11-04 02:39:34.449434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.643 [2024-11-04 02:39:34.525479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:49.017  [2024-11-04T02:39:36.388Z] Copying: 711/1024 [MB] (711 MBps) [2024-11-04T02:39:38.928Z] Copying: 1024/1024 [MB] (average 722 MBps) 00:28:51.817 00:28:51.817 02:39:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:51.817 02:39:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:53.726 02:39:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:53.726 02:39:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=1b5bb72ef51c0e571b8b5f444024af00 00:28:53.726 02:39:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 1b5bb72ef51c0e571b8b5f444024af00 != \1\b\5\b\b\7\2\e\f\5\1\c\0\e\5\7\1\b\8\b\5\f\4\4\4\0\2\4\a\f\0\0 ]] 00:28:53.726 02:39:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:53.726 02:39:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:53.726 02:39:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:53.726 02:39:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:53.726 02:39:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:53.726 02:39:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:53.726 02:39:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:53.726 02:39:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:53.726 02:39:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:53.726 02:39:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:53.726 02:39:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81034 ]] 00:28:53.726 02:39:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81034 00:28:53.726 02:39:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 81034 ']' 00:28:53.726 02:39:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 81034 00:28:53.726 02:39:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:28:53.726 02:39:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:53.726 02:39:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81034 00:28:53.726 killing process with pid 81034 00:28:53.726 02:39:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:53.726 02:39:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:53.726 02:39:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81034' 00:28:53.726 02:39:40 ftl.ftl_upgrade_shutdown -- 
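Both iterations follow the same pattern, visible in the upgrade_shutdown.sh xtrace: read the next 1024 MiB window from ftln1 over NVMe/TCP, hash it, and compare against the digest captured before the target was shut down. A reconstruction of that loop from the trace (iterations is 2 in this test; the expected_md5 array is illustrative, the real script derives the reference digests from the pre-shutdown pass):

    skip=0
    for (( i = 0; i < iterations; i++ )); do
        echo "Validate MD5 checksum, iteration $(( i + 1 ))"
        # $tmp_file stands in for test/ftl/file from the trace.
        tcp_dd --ib=ftln1 --of="$tmp_file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        (( skip += 1024 ))
        sum=$(md5sum "$tmp_file" | cut -f1 -d' ')
        # Any mismatch against the pre-shutdown digest fails the test.
        [[ $sum == "${expected_md5[i]}" ]]
    done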
common/autotest_common.sh@971 -- # kill 81034 00:28:53.726 02:39:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 81034 00:28:54.297 [2024-11-04 02:39:41.161942] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:54.297 [2024-11-04 02:39:41.174193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.297 [2024-11-04 02:39:41.174226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:54.297 [2024-11-04 02:39:41.174237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:54.297 [2024-11-04 02:39:41.174245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.297 [2024-11-04 02:39:41.174263] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:54.297 [2024-11-04 02:39:41.176578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.297 [2024-11-04 02:39:41.176598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:54.297 [2024-11-04 02:39:41.176607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.306 ms 00:28:54.297 [2024-11-04 02:39:41.176618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.297 [2024-11-04 02:39:41.176802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.297 [2024-11-04 02:39:41.176811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:54.297 [2024-11-04 02:39:41.176818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.167 ms 00:28:54.297 [2024-11-04 02:39:41.176824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.297 [2024-11-04 02:39:41.178027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.297 [2024-11-04 02:39:41.178059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:54.297 [2024-11-04 02:39:41.178187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.190 ms 00:28:54.297 [2024-11-04 02:39:41.178207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.297 [2024-11-04 02:39:41.179113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.298 [2024-11-04 02:39:41.179184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:54.298 [2024-11-04 02:39:41.179228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.868 ms 00:28:54.298 [2024-11-04 02:39:41.179246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.298 [2024-11-04 02:39:41.186951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.298 [2024-11-04 02:39:41.187042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:54.298 [2024-11-04 02:39:41.187088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.665 ms 00:28:54.298 [2024-11-04 02:39:41.187106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.298 [2024-11-04 02:39:41.191481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.298 [2024-11-04 02:39:41.191508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:54.298 [2024-11-04 02:39:41.191516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.338 ms 00:28:54.298 [2024-11-04 02:39:41.191523] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:54.298 [2024-11-04 02:39:41.191672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.298 [2024-11-04 02:39:41.191682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:54.298 [2024-11-04 02:39:41.191689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:28:54.298 [2024-11-04 02:39:41.191695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.298 [2024-11-04 02:39:41.199009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.298 [2024-11-04 02:39:41.199031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:54.298 [2024-11-04 02:39:41.199038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.303 ms 00:28:54.298 [2024-11-04 02:39:41.199044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.298 [2024-11-04 02:39:41.205859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.298 [2024-11-04 02:39:41.205890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:54.298 [2024-11-04 02:39:41.205896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.791 ms 00:28:54.298 [2024-11-04 02:39:41.205902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.298 [2024-11-04 02:39:41.212726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.298 [2024-11-04 02:39:41.212749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:54.298 [2024-11-04 02:39:41.212756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.800 ms 00:28:54.298 [2024-11-04 02:39:41.212761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.298 [2024-11-04 02:39:41.219584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.298 [2024-11-04 02:39:41.219608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:54.298 [2024-11-04 02:39:41.219628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.778 ms 00:28:54.298 [2024-11-04 02:39:41.219634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.298 [2024-11-04 02:39:41.219658] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:54.298 [2024-11-04 02:39:41.219673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:54.298 [2024-11-04 02:39:41.219681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:54.298 [2024-11-04 02:39:41.219688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:54.298 [2024-11-04 02:39:41.219694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:54.298 [2024-11-04 02:39:41.219701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:54.298 [2024-11-04 02:39:41.219707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:54.298 [2024-11-04 02:39:41.219712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:54.298 [2024-11-04 02:39:41.219718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:54.298 
[2024-11-04 02:39:41.219724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:54.298 [2024-11-04 02:39:41.219730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:54.298 [2024-11-04 02:39:41.219736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:54.298 [2024-11-04 02:39:41.219742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:54.298 [2024-11-04 02:39:41.219748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:54.298 [2024-11-04 02:39:41.219754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:54.298 [2024-11-04 02:39:41.219760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:54.298 [2024-11-04 02:39:41.219765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:54.298 [2024-11-04 02:39:41.219771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:54.298 [2024-11-04 02:39:41.219777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:54.298 [2024-11-04 02:39:41.219784] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:54.298 [2024-11-04 02:39:41.219790] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: c4d73512-48c8-43f4-a384-6e9f2bccccb1 00:28:54.298 [2024-11-04 02:39:41.219796] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:54.298 [2024-11-04 02:39:41.219801] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:54.298 [2024-11-04 02:39:41.219807] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:54.298 [2024-11-04 02:39:41.219812] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:54.298 [2024-11-04 02:39:41.219818] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:54.298 [2024-11-04 02:39:41.219824] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:54.298 [2024-11-04 02:39:41.219829] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:54.298 [2024-11-04 02:39:41.219834] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:54.298 [2024-11-04 02:39:41.219839] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:54.298 [2024-11-04 02:39:41.219845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.298 [2024-11-04 02:39:41.219851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:54.298 [2024-11-04 02:39:41.219859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.188 ms 00:28:54.298 [2024-11-04 02:39:41.219873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.298 [2024-11-04 02:39:41.230013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.298 [2024-11-04 02:39:41.230097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:54.298 [2024-11-04 02:39:41.230137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.126 ms 00:28:54.298 [2024-11-04 02:39:41.230154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
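The band dump above shows the expected post-upgrade picture: two full bands plus a 2048-block partial, fifteen free bands, and WAF reported as inf because this pass issued no user writes (the 320 total writes are all metadata). Utilization can be re-derived from a saved log with something like the following; a sketch assuming the "Band N: used / size" layout printed by ftl_debug.c:

    awk '
      /ftl_dev_dump_bands/ {
          for (i = 1; i <= NF; i++)
              if ($i == "Band") { used += $(i + 2); size += $(i + 4) }
      }
      END { if (size) printf "bands: %d / %d blocks used (%.2f%%)\n",
                             used, size, 100 * used / size }
    ' ftl.log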
00:28:54.298 [2024-11-04 02:39:41.230452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.298 [2024-11-04 02:39:41.230508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:54.298 [2024-11-04 02:39:41.230546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.273 ms 00:28:54.298 [2024-11-04 02:39:41.230564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.298 [2024-11-04 02:39:41.265454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:54.298 [2024-11-04 02:39:41.265548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:54.298 [2024-11-04 02:39:41.265588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:54.298 [2024-11-04 02:39:41.265606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.298 [2024-11-04 02:39:41.265641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:54.298 [2024-11-04 02:39:41.265678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:54.298 [2024-11-04 02:39:41.265696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:54.298 [2024-11-04 02:39:41.265711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.298 [2024-11-04 02:39:41.265805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:54.298 [2024-11-04 02:39:41.265827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:54.298 [2024-11-04 02:39:41.265843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:54.298 [2024-11-04 02:39:41.265857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.298 [2024-11-04 02:39:41.265897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:54.298 [2024-11-04 02:39:41.265914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:54.298 [2024-11-04 02:39:41.265933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:54.298 [2024-11-04 02:39:41.265947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.298 [2024-11-04 02:39:41.328106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:54.298 [2024-11-04 02:39:41.328214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:54.298 [2024-11-04 02:39:41.328254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:54.298 [2024-11-04 02:39:41.328272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.298 [2024-11-04 02:39:41.378676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:54.298 [2024-11-04 02:39:41.378800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:54.298 [2024-11-04 02:39:41.378839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:54.298 [2024-11-04 02:39:41.378856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.298 [2024-11-04 02:39:41.378953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:54.298 [2024-11-04 02:39:41.378974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:54.298 [2024-11-04 02:39:41.378990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:54.298 [2024-11-04 02:39:41.379005] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.299 [2024-11-04 02:39:41.379069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:54.299 [2024-11-04 02:39:41.379091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:54.299 [2024-11-04 02:39:41.379107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:54.299 [2024-11-04 02:39:41.379160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.299 [2024-11-04 02:39:41.379280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:54.299 [2024-11-04 02:39:41.379302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:54.299 [2024-11-04 02:39:41.379340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:54.299 [2024-11-04 02:39:41.379357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.299 [2024-11-04 02:39:41.379398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:54.299 [2024-11-04 02:39:41.379444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:54.299 [2024-11-04 02:39:41.379462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:54.299 [2024-11-04 02:39:41.379477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.299 [2024-11-04 02:39:41.379562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:54.299 [2024-11-04 02:39:41.379582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:54.299 [2024-11-04 02:39:41.379597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:54.299 [2024-11-04 02:39:41.379620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.299 [2024-11-04 02:39:41.379672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:54.299 [2024-11-04 02:39:41.379806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:54.299 [2024-11-04 02:39:41.379823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:54.299 [2024-11-04 02:39:41.379842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.299 [2024-11-04 02:39:41.379971] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 205.748 ms, result 0 00:28:55.241 02:39:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:55.241 02:39:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:55.241 02:39:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:55.241 02:39:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:55.241 02:39:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:55.241 02:39:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:55.241 Remove shared memory files 00:28:55.241 02:39:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:55.241 02:39:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:55.241 02:39:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:55.241 02:39:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:55.241 02:39:42 
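Target teardown above went through the killprocess helper, whose kill/wait xtrace appears just before the shutdown dump: it rejects an empty PID, confirms the process still answers kill -0, looks up its comm name so sudo wrappers can be treated specially, then signals and reaps it. A condensed sketch of the branch exercised here (the body of the sudo branch is an assumption; the real function lives in autotest_common.sh):

    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 0                      # already gone
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in this run
        fi
        echo "killing process with pid $pid"
        if [[ $process_name == sudo ]]; then
            sudo kill "$pid"        # assumption: escalate for sudo-wrapped targets
        else
            kill "$pid"
        fi
        wait "$pid"
    }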
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid80806 00:28:55.241 02:39:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:55.241 02:39:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:55.242 ************************************ 00:28:55.242 END TEST ftl_upgrade_shutdown 00:28:55.242 ************************************ 00:28:55.242 00:28:55.242 real 1m29.082s 00:28:55.242 user 1m59.710s 00:28:55.242 sys 0m20.665s 00:28:55.242 02:39:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:55.242 02:39:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:55.242 02:39:42 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:28:55.242 02:39:42 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:55.242 02:39:42 ftl -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:28:55.242 02:39:42 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:55.242 02:39:42 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:55.503 ************************************ 00:28:55.503 START TEST ftl_restore_fast 00:28:55.503 ************************************ 00:28:55.503 02:39:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:55.503 * Looking for test storage... 00:28:55.503 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:55.503 02:39:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:55.503 02:39:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1691 -- # lcov --version 00:28:55.503 02:39:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:55.503 02:39:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:55.503 02:39:42 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:55.503 02:39:42 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:55.503 02:39:42 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:55.503 02:39:42 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:28:55.503 02:39:42 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:55.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.504 --rc genhtml_branch_coverage=1 00:28:55.504 --rc genhtml_function_coverage=1 00:28:55.504 --rc genhtml_legend=1 00:28:55.504 --rc geninfo_all_blocks=1 00:28:55.504 --rc geninfo_unexecuted_blocks=1 00:28:55.504 00:28:55.504 ' 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:55.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.504 --rc genhtml_branch_coverage=1 00:28:55.504 --rc genhtml_function_coverage=1 00:28:55.504 --rc genhtml_legend=1 00:28:55.504 --rc geninfo_all_blocks=1 00:28:55.504 --rc geninfo_unexecuted_blocks=1 00:28:55.504 00:28:55.504 ' 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:55.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.504 --rc genhtml_branch_coverage=1 00:28:55.504 --rc genhtml_function_coverage=1 00:28:55.504 --rc genhtml_legend=1 00:28:55.504 --rc geninfo_all_blocks=1 00:28:55.504 --rc geninfo_unexecuted_blocks=1 00:28:55.504 00:28:55.504 ' 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:55.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.504 --rc genhtml_branch_coverage=1 00:28:55.504 --rc genhtml_function_coverage=1 00:28:55.504 --rc genhtml_legend=1 00:28:55.504 --rc geninfo_all_blocks=1 00:28:55.504 --rc geninfo_unexecuted_blocks=1 00:28:55.504 00:28:55.504 ' 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
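The wall of scripts/common.sh lines above is just the harness asking whether the installed lcov predates 2.0 (lt 1.15 2): cmp_versions splits both versions on dots and dashes into arrays and compares them field by field, padding the shorter one with zeros. The same idea in a self-contained sketch (not SPDK's exact implementation):

    version_lt() {                  # succeeds when $1 sorts before $2
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                    # versions are equal
    }

    version_lt 1.15 2 && echo "lcov 1.15 is older than 2"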
00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.DooAycVtLa 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:28:55.504 02:39:42 ftl.ftl_restore_fast 
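restore.sh was started as restore.sh -f -c 0000:00:10.0 0000:00:11.0, and the option handling traced here and continued just below maps onto a standard getopts loop: -f selects the fast-shutdown variant, -c names the NV-cache device, and whatever remains after the shift is the base device. A reconstruction under those assumptions (-u is accepted by the optstring; reading it as a restore UUID is an assumption):

    fast_shutdown=0
    while getopts ':u:c:f' opt; do
        case $opt in
            c) nv_cache=$OPTARG ;;      # 0000:00:10.0 in this run
            f) fast_shutdown=1 ;;       # adds --fast-shutdown to bdev_ftl_create
            u) uuid=$OPTARG ;;          # assumption: restore an existing instance
        esac
    done
    shift $(( OPTIND - 1 ))             # the trace shows the equivalent "shift 3"
    device=$1                           # 0000:00:11.0, the base bdev
    timeout=240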
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=81323 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 81323 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- common/autotest_common.sh@833 -- # '[' -z 81323 ']' 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:55.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:55.504 02:39:42 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:28:55.765 [2024-11-04 02:39:42.636015] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:28:55.765 [2024-11-04 02:39:42.636318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81323 ] 00:28:55.765 [2024-11-04 02:39:42.798681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.026 [2024-11-04 02:39:42.923110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.597 02:39:43 ftl.ftl_restore_fast -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:56.597 02:39:43 ftl.ftl_restore_fast -- common/autotest_common.sh@866 -- # return 0 00:28:56.597 02:39:43 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:56.597 02:39:43 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:28:56.597 02:39:43 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:56.597 02:39:43 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:28:56.597 02:39:43 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:28:56.597 02:39:43 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:56.859 02:39:43 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:56.859 02:39:43 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:28:56.859 02:39:43 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:56.859 02:39:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:28:56.859 02:39:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local bdev_info 00:28:56.859 02:39:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bs 00:28:56.859 02:39:43 ftl.ftl_restore_fast -- 
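create_base_bdev attaches the QEMU NVMe controller at 0000:00:11.0 as nvme0 and then sizes it with get_bdev_size, whose trace continues below: fetch the bdev descriptor over RPC, pull block_size and num_blocks with jq, and convert to MiB (4096 x 1310720 / 2^20 = 5120 here). A condensed reconstruction, with rootdir as exported by ftl/common.sh earlier in the trace:

    get_bdev_size() {                   # prints the bdev size in MiB
        local bdev_name=$1 bdev_info bs nb
        bdev_info=$("$rootdir/scripts/rpc.py" bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")
        echo $(( bs * nb / 1024 / 1024 ))
    }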
common/autotest_common.sh@1383 -- # local nb 00:28:56.859 02:39:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:57.120 02:39:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:28:57.120 { 00:28:57.120 "name": "nvme0n1", 00:28:57.120 "aliases": [ 00:28:57.120 "72902b3a-4975-44c2-a43b-210843cae9e4" 00:28:57.120 ], 00:28:57.120 "product_name": "NVMe disk", 00:28:57.120 "block_size": 4096, 00:28:57.120 "num_blocks": 1310720, 00:28:57.120 "uuid": "72902b3a-4975-44c2-a43b-210843cae9e4", 00:28:57.120 "numa_id": -1, 00:28:57.120 "assigned_rate_limits": { 00:28:57.120 "rw_ios_per_sec": 0, 00:28:57.120 "rw_mbytes_per_sec": 0, 00:28:57.120 "r_mbytes_per_sec": 0, 00:28:57.120 "w_mbytes_per_sec": 0 00:28:57.120 }, 00:28:57.120 "claimed": true, 00:28:57.120 "claim_type": "read_many_write_one", 00:28:57.120 "zoned": false, 00:28:57.120 "supported_io_types": { 00:28:57.120 "read": true, 00:28:57.120 "write": true, 00:28:57.120 "unmap": true, 00:28:57.120 "flush": true, 00:28:57.120 "reset": true, 00:28:57.120 "nvme_admin": true, 00:28:57.120 "nvme_io": true, 00:28:57.120 "nvme_io_md": false, 00:28:57.120 "write_zeroes": true, 00:28:57.120 "zcopy": false, 00:28:57.120 "get_zone_info": false, 00:28:57.120 "zone_management": false, 00:28:57.120 "zone_append": false, 00:28:57.120 "compare": true, 00:28:57.120 "compare_and_write": false, 00:28:57.120 "abort": true, 00:28:57.120 "seek_hole": false, 00:28:57.120 "seek_data": false, 00:28:57.120 "copy": true, 00:28:57.120 "nvme_iov_md": false 00:28:57.120 }, 00:28:57.120 "driver_specific": { 00:28:57.120 "nvme": [ 00:28:57.120 { 00:28:57.120 "pci_address": "0000:00:11.0", 00:28:57.120 "trid": { 00:28:57.120 "trtype": "PCIe", 00:28:57.120 "traddr": "0000:00:11.0" 00:28:57.120 }, 00:28:57.120 "ctrlr_data": { 00:28:57.120 "cntlid": 0, 00:28:57.120 "vendor_id": "0x1b36", 00:28:57.120 "model_number": "QEMU NVMe Ctrl", 00:28:57.120 "serial_number": "12341", 00:28:57.120 "firmware_revision": "8.0.0", 00:28:57.120 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:57.120 "oacs": { 00:28:57.120 "security": 0, 00:28:57.120 "format": 1, 00:28:57.120 "firmware": 0, 00:28:57.120 "ns_manage": 1 00:28:57.120 }, 00:28:57.120 "multi_ctrlr": false, 00:28:57.120 "ana_reporting": false 00:28:57.120 }, 00:28:57.120 "vs": { 00:28:57.120 "nvme_version": "1.4" 00:28:57.120 }, 00:28:57.120 "ns_data": { 00:28:57.120 "id": 1, 00:28:57.120 "can_share": false 00:28:57.120 } 00:28:57.120 } 00:28:57.120 ], 00:28:57.120 "mp_policy": "active_passive" 00:28:57.120 } 00:28:57.120 } 00:28:57.120 ]' 00:28:57.120 02:39:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:28:57.120 02:39:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # bs=4096 00:28:57.120 02:39:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:28:57.120 02:39:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # nb=1310720 00:28:57.120 02:39:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:28:57.120 02:39:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1390 -- # echo 5120 00:28:57.120 02:39:44 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:28:57.120 02:39:44 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:57.120 02:39:44 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:28:57.120 02:39:44 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r 
'.[] | .uuid' 00:28:57.120 02:39:44 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:57.381 02:39:44 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=04139210-73a0-4e31-8bb4-4f3ec098db32 00:28:57.381 02:39:44 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:28:57.381 02:39:44 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 04139210-73a0-4e31-8bb4-4f3ec098db32 00:28:57.642 02:39:44 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:57.901 02:39:44 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=53d8121d-df7f-4fb3-b9a1-5b777512c1d9 00:28:57.901 02:39:44 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 53d8121d-df7f-4fb3-b9a1-5b777512c1d9 00:28:58.161 02:39:45 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=9b6a47f8-e7e3-4565-a6b7-944359a2dfa6 00:28:58.162 02:39:45 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:28:58.162 02:39:45 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9b6a47f8-e7e3-4565-a6b7-944359a2dfa6 00:28:58.162 02:39:45 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:28:58.162 02:39:45 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:58.162 02:39:45 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=9b6a47f8-e7e3-4565-a6b7-944359a2dfa6 00:28:58.162 02:39:45 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:28:58.162 02:39:45 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 9b6a47f8-e7e3-4565-a6b7-944359a2dfa6 00:28:58.162 02:39:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bdev_name=9b6a47f8-e7e3-4565-a6b7-944359a2dfa6 00:28:58.162 02:39:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local bdev_info 00:28:58.162 02:39:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bs 00:28:58.162 02:39:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local nb 00:28:58.162 02:39:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9b6a47f8-e7e3-4565-a6b7-944359a2dfa6 00:28:58.162 02:39:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:28:58.162 { 00:28:58.162 "name": "9b6a47f8-e7e3-4565-a6b7-944359a2dfa6", 00:28:58.162 "aliases": [ 00:28:58.162 "lvs/nvme0n1p0" 00:28:58.162 ], 00:28:58.162 "product_name": "Logical Volume", 00:28:58.162 "block_size": 4096, 00:28:58.162 "num_blocks": 26476544, 00:28:58.162 "uuid": "9b6a47f8-e7e3-4565-a6b7-944359a2dfa6", 00:28:58.162 "assigned_rate_limits": { 00:28:58.162 "rw_ios_per_sec": 0, 00:28:58.162 "rw_mbytes_per_sec": 0, 00:28:58.162 "r_mbytes_per_sec": 0, 00:28:58.162 "w_mbytes_per_sec": 0 00:28:58.162 }, 00:28:58.162 "claimed": false, 00:28:58.162 "zoned": false, 00:28:58.162 "supported_io_types": { 00:28:58.162 "read": true, 00:28:58.162 "write": true, 00:28:58.162 "unmap": true, 00:28:58.162 "flush": false, 00:28:58.162 "reset": true, 00:28:58.162 "nvme_admin": false, 00:28:58.162 "nvme_io": false, 00:28:58.162 "nvme_io_md": false, 00:28:58.162 "write_zeroes": true, 00:28:58.162 "zcopy": false, 00:28:58.162 "get_zone_info": false, 00:28:58.162 "zone_management": false, 00:28:58.162 "zone_append": 
false, 00:28:58.162 "compare": false, 00:28:58.162 "compare_and_write": false, 00:28:58.162 "abort": false, 00:28:58.162 "seek_hole": true, 00:28:58.162 "seek_data": true, 00:28:58.162 "copy": false, 00:28:58.162 "nvme_iov_md": false 00:28:58.162 }, 00:28:58.162 "driver_specific": { 00:28:58.162 "lvol": { 00:28:58.162 "lvol_store_uuid": "53d8121d-df7f-4fb3-b9a1-5b777512c1d9", 00:28:58.162 "base_bdev": "nvme0n1", 00:28:58.162 "thin_provision": true, 00:28:58.162 "num_allocated_clusters": 0, 00:28:58.162 "snapshot": false, 00:28:58.162 "clone": false, 00:28:58.162 "esnap_clone": false 00:28:58.162 } 00:28:58.162 } 00:28:58.162 } 00:28:58.162 ]' 00:28:58.162 02:39:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:28:58.421 02:39:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # bs=4096 00:28:58.421 02:39:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:28:58.421 02:39:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # nb=26476544 00:28:58.421 02:39:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:28:58.421 02:39:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1390 -- # echo 103424 00:28:58.421 02:39:45 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:28:58.421 02:39:45 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:28:58.421 02:39:45 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:58.701 02:39:45 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:58.701 02:39:45 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:58.701 02:39:45 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 9b6a47f8-e7e3-4565-a6b7-944359a2dfa6 00:28:58.701 02:39:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bdev_name=9b6a47f8-e7e3-4565-a6b7-944359a2dfa6 00:28:58.701 02:39:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local bdev_info 00:28:58.701 02:39:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bs 00:28:58.701 02:39:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local nb 00:28:58.701 02:39:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9b6a47f8-e7e3-4565-a6b7-944359a2dfa6 00:28:58.701 02:39:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:28:58.701 { 00:28:58.701 "name": "9b6a47f8-e7e3-4565-a6b7-944359a2dfa6", 00:28:58.701 "aliases": [ 00:28:58.701 "lvs/nvme0n1p0" 00:28:58.701 ], 00:28:58.701 "product_name": "Logical Volume", 00:28:58.701 "block_size": 4096, 00:28:58.701 "num_blocks": 26476544, 00:28:58.701 "uuid": "9b6a47f8-e7e3-4565-a6b7-944359a2dfa6", 00:28:58.701 "assigned_rate_limits": { 00:28:58.701 "rw_ios_per_sec": 0, 00:28:58.701 "rw_mbytes_per_sec": 0, 00:28:58.701 "r_mbytes_per_sec": 0, 00:28:58.701 "w_mbytes_per_sec": 0 00:28:58.701 }, 00:28:58.701 "claimed": false, 00:28:58.701 "zoned": false, 00:28:58.701 "supported_io_types": { 00:28:58.701 "read": true, 00:28:58.702 "write": true, 00:28:58.702 "unmap": true, 00:28:58.702 "flush": false, 00:28:58.702 "reset": true, 00:28:58.702 "nvme_admin": false, 00:28:58.702 "nvme_io": false, 00:28:58.702 "nvme_io_md": false, 00:28:58.702 "write_zeroes": true, 00:28:58.702 "zcopy": false, 00:28:58.702 "get_zone_info": false, 00:28:58.702 "zone_management": false, 
00:28:58.702 "zone_append": false, 00:28:58.702 "compare": false, 00:28:58.702 "compare_and_write": false, 00:28:58.702 "abort": false, 00:28:58.702 "seek_hole": true, 00:28:58.702 "seek_data": true, 00:28:58.702 "copy": false, 00:28:58.702 "nvme_iov_md": false 00:28:58.702 }, 00:28:58.702 "driver_specific": { 00:28:58.702 "lvol": { 00:28:58.702 "lvol_store_uuid": "53d8121d-df7f-4fb3-b9a1-5b777512c1d9", 00:28:58.702 "base_bdev": "nvme0n1", 00:28:58.702 "thin_provision": true, 00:28:58.702 "num_allocated_clusters": 0, 00:28:58.702 "snapshot": false, 00:28:58.702 "clone": false, 00:28:58.702 "esnap_clone": false 00:28:58.702 } 00:28:58.702 } 00:28:58.702 } 00:28:58.702 ]' 00:28:58.702 02:39:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:28:58.702 02:39:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # bs=4096 00:28:58.702 02:39:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:28:58.965 02:39:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # nb=26476544 00:28:58.965 02:39:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:28:58.965 02:39:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1390 -- # echo 103424 00:28:58.965 02:39:45 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:28:58.965 02:39:45 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:58.965 02:39:45 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:28:58.965 02:39:45 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 9b6a47f8-e7e3-4565-a6b7-944359a2dfa6 00:28:58.965 02:39:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bdev_name=9b6a47f8-e7e3-4565-a6b7-944359a2dfa6 00:28:58.965 02:39:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local bdev_info 00:28:58.965 02:39:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bs 00:28:58.965 02:39:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local nb 00:28:58.965 02:39:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9b6a47f8-e7e3-4565-a6b7-944359a2dfa6 00:28:59.223 02:39:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:28:59.223 { 00:28:59.223 "name": "9b6a47f8-e7e3-4565-a6b7-944359a2dfa6", 00:28:59.223 "aliases": [ 00:28:59.223 "lvs/nvme0n1p0" 00:28:59.223 ], 00:28:59.223 "product_name": "Logical Volume", 00:28:59.223 "block_size": 4096, 00:28:59.223 "num_blocks": 26476544, 00:28:59.223 "uuid": "9b6a47f8-e7e3-4565-a6b7-944359a2dfa6", 00:28:59.223 "assigned_rate_limits": { 00:28:59.223 "rw_ios_per_sec": 0, 00:28:59.223 "rw_mbytes_per_sec": 0, 00:28:59.223 "r_mbytes_per_sec": 0, 00:28:59.223 "w_mbytes_per_sec": 0 00:28:59.223 }, 00:28:59.223 "claimed": false, 00:28:59.223 "zoned": false, 00:28:59.223 "supported_io_types": { 00:28:59.223 "read": true, 00:28:59.223 "write": true, 00:28:59.223 "unmap": true, 00:28:59.223 "flush": false, 00:28:59.223 "reset": true, 00:28:59.223 "nvme_admin": false, 00:28:59.223 "nvme_io": false, 00:28:59.223 "nvme_io_md": false, 00:28:59.223 "write_zeroes": true, 00:28:59.223 "zcopy": false, 00:28:59.223 "get_zone_info": false, 00:28:59.223 "zone_management": false, 00:28:59.223 "zone_append": false, 00:28:59.223 "compare": false, 00:28:59.223 "compare_and_write": false, 00:28:59.223 "abort": false, 00:28:59.223 "seek_hole": 
true, 00:28:59.223 "seek_data": true, 00:28:59.223 "copy": false, 00:28:59.223 "nvme_iov_md": false 00:28:59.223 }, 00:28:59.223 "driver_specific": { 00:28:59.223 "lvol": { 00:28:59.223 "lvol_store_uuid": "53d8121d-df7f-4fb3-b9a1-5b777512c1d9", 00:28:59.223 "base_bdev": "nvme0n1", 00:28:59.223 "thin_provision": true, 00:28:59.223 "num_allocated_clusters": 0, 00:28:59.223 "snapshot": false, 00:28:59.223 "clone": false, 00:28:59.223 "esnap_clone": false 00:28:59.223 } 00:28:59.223 } 00:28:59.223 } 00:28:59.223 ]' 00:28:59.223 02:39:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:28:59.223 02:39:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # bs=4096 00:28:59.223 02:39:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:28:59.223 02:39:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # nb=26476544 00:28:59.223 02:39:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:28:59.223 02:39:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1390 -- # echo 103424 00:28:59.223 02:39:46 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:28:59.223 02:39:46 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 9b6a47f8-e7e3-4565-a6b7-944359a2dfa6 --l2p_dram_limit 10' 00:28:59.223 02:39:46 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:28:59.223 02:39:46 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:28:59.223 02:39:46 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:59.223 02:39:46 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:28:59.223 02:39:46 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:28:59.224 02:39:46 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9b6a47f8-e7e3-4565-a6b7-944359a2dfa6 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:28:59.485 [2024-11-04 02:39:46.449182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.485 [2024-11-04 02:39:46.449222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:59.485 [2024-11-04 02:39:46.449234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:59.485 [2024-11-04 02:39:46.449242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.485 [2024-11-04 02:39:46.449286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.485 [2024-11-04 02:39:46.449293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:59.485 [2024-11-04 02:39:46.449301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:28:59.485 [2024-11-04 02:39:46.449307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.485 [2024-11-04 02:39:46.449327] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:59.485 [2024-11-04 02:39:46.449898] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:59.485 [2024-11-04 02:39:46.449917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.485 [2024-11-04 02:39:46.449923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:59.485 [2024-11-04 02:39:46.449933] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms 00:28:59.485 [2024-11-04 02:39:46.449939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.485 [2024-11-04 02:39:46.449966] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f95c5d21-0b04-45bd-8eaf-594cb18bd776 00:28:59.485 [2024-11-04 02:39:46.450960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.485 [2024-11-04 02:39:46.450986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:59.485 [2024-11-04 02:39:46.450993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:28:59.485 [2024-11-04 02:39:46.451001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.485 [2024-11-04 02:39:46.455756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.485 [2024-11-04 02:39:46.455788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:59.485 [2024-11-04 02:39:46.455796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.699 ms 00:28:59.485 [2024-11-04 02:39:46.455805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.485 [2024-11-04 02:39:46.455883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.485 [2024-11-04 02:39:46.455893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:59.485 [2024-11-04 02:39:46.455899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:28:59.485 [2024-11-04 02:39:46.455909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.485 [2024-11-04 02:39:46.455944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.485 [2024-11-04 02:39:46.455954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:59.485 [2024-11-04 02:39:46.455961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:59.485 [2024-11-04 02:39:46.455967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.485 [2024-11-04 02:39:46.455985] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:59.485 [2024-11-04 02:39:46.458813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.485 [2024-11-04 02:39:46.458942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:59.485 [2024-11-04 02:39:46.458958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.832 ms 00:28:59.485 [2024-11-04 02:39:46.458968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.485 [2024-11-04 02:39:46.458998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.485 [2024-11-04 02:39:46.459004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:59.485 [2024-11-04 02:39:46.459012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:59.485 [2024-11-04 02:39:46.459018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.485 [2024-11-04 02:39:46.459037] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:59.485 [2024-11-04 02:39:46.459142] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:59.485 [2024-11-04 02:39:46.459155] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:59.485 [2024-11-04 02:39:46.459164] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:59.485 [2024-11-04 02:39:46.459173] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:59.485 [2024-11-04 02:39:46.459179] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:59.485 [2024-11-04 02:39:46.459187] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:59.485 [2024-11-04 02:39:46.459192] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:59.485 [2024-11-04 02:39:46.459200] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:59.485 [2024-11-04 02:39:46.459205] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:59.485 [2024-11-04 02:39:46.459214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.485 [2024-11-04 02:39:46.459220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:59.485 [2024-11-04 02:39:46.459227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:28:59.485 [2024-11-04 02:39:46.459238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.485 [2024-11-04 02:39:46.459303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.486 [2024-11-04 02:39:46.459311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:59.486 [2024-11-04 02:39:46.459318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:28:59.486 [2024-11-04 02:39:46.459323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.486 [2024-11-04 02:39:46.459398] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:59.486 [2024-11-04 02:39:46.459408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:59.486 [2024-11-04 02:39:46.459415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:59.486 [2024-11-04 02:39:46.459421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:59.486 [2024-11-04 02:39:46.459428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:59.486 [2024-11-04 02:39:46.459433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:59.486 [2024-11-04 02:39:46.459440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:59.486 [2024-11-04 02:39:46.459445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:59.486 [2024-11-04 02:39:46.459452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:59.486 [2024-11-04 02:39:46.459457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:59.486 [2024-11-04 02:39:46.459466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:59.486 [2024-11-04 02:39:46.459472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:59.486 [2024-11-04 02:39:46.459479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:59.486 [2024-11-04 02:39:46.459484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:59.486 [2024-11-04 02:39:46.459491] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:28:59.486 [2024-11-04 02:39:46.459496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:59.486 [2024-11-04 02:39:46.459504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:59.486 [2024-11-04 02:39:46.459509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:59.486 [2024-11-04 02:39:46.459516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:59.486 [2024-11-04 02:39:46.459521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:59.486 [2024-11-04 02:39:46.459528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:59.486 [2024-11-04 02:39:46.459533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:59.486 [2024-11-04 02:39:46.459540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:59.486 [2024-11-04 02:39:46.459546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:59.486 [2024-11-04 02:39:46.459552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:59.486 [2024-11-04 02:39:46.459557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:59.486 [2024-11-04 02:39:46.459563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:59.486 [2024-11-04 02:39:46.459568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:59.486 [2024-11-04 02:39:46.459575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:59.486 [2024-11-04 02:39:46.459580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:59.486 [2024-11-04 02:39:46.459587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:59.486 [2024-11-04 02:39:46.459591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:59.486 [2024-11-04 02:39:46.459599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:59.486 [2024-11-04 02:39:46.459604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:59.486 [2024-11-04 02:39:46.459627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:59.486 [2024-11-04 02:39:46.459633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:59.486 [2024-11-04 02:39:46.459639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:59.486 [2024-11-04 02:39:46.459644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:59.486 [2024-11-04 02:39:46.459650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:59.486 [2024-11-04 02:39:46.459655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:59.486 [2024-11-04 02:39:46.459662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:59.486 [2024-11-04 02:39:46.459667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:59.486 [2024-11-04 02:39:46.459674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:59.486 [2024-11-04 02:39:46.459679] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:59.486 [2024-11-04 02:39:46.459687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:59.486 [2024-11-04 02:39:46.459692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:59.486 [2024-11-04 
02:39:46.459699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:59.486 [2024-11-04 02:39:46.459706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:59.486 [2024-11-04 02:39:46.459715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:59.486 [2024-11-04 02:39:46.459720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:59.486 [2024-11-04 02:39:46.459727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:59.486 [2024-11-04 02:39:46.459732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:59.486 [2024-11-04 02:39:46.459739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:59.486 [2024-11-04 02:39:46.459747] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:59.486 [2024-11-04 02:39:46.459756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:59.486 [2024-11-04 02:39:46.459763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:59.486 [2024-11-04 02:39:46.459771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:59.486 [2024-11-04 02:39:46.459776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:59.486 [2024-11-04 02:39:46.459783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:59.486 [2024-11-04 02:39:46.459788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:59.486 [2024-11-04 02:39:46.459795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:59.486 [2024-11-04 02:39:46.459800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:59.486 [2024-11-04 02:39:46.459807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:59.486 [2024-11-04 02:39:46.459812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:59.486 [2024-11-04 02:39:46.459820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:59.486 [2024-11-04 02:39:46.459825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:59.486 [2024-11-04 02:39:46.459832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:59.486 [2024-11-04 02:39:46.459837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:59.486 [2024-11-04 02:39:46.459845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:59.486 [2024-11-04 
02:39:46.459851] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:59.486 [2024-11-04 02:39:46.459858] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:59.486 [2024-11-04 02:39:46.459882] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:59.486 [2024-11-04 02:39:46.459890] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:59.486 [2024-11-04 02:39:46.459895] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:59.486 [2024-11-04 02:39:46.459903] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:59.486 [2024-11-04 02:39:46.459910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.486 [2024-11-04 02:39:46.459917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:59.486 [2024-11-04 02:39:46.459924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:28:59.486 [2024-11-04 02:39:46.459931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.486 [2024-11-04 02:39:46.459974] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:59.486 [2024-11-04 02:39:46.459985] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:29:03.686 [2024-11-04 02:39:50.100526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.686 [2024-11-04 02:39:50.100617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:29:03.686 [2024-11-04 02:39:50.100636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3640.536 ms 00:29:03.686 [2024-11-04 02:39:50.100648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.686 [2024-11-04 02:39:50.132177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.686 [2024-11-04 02:39:50.132420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:03.686 [2024-11-04 02:39:50.132442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.071 ms 00:29:03.686 [2024-11-04 02:39:50.132456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.686 [2024-11-04 02:39:50.132601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.686 [2024-11-04 02:39:50.132615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:03.686 [2024-11-04 02:39:50.132625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:29:03.686 [2024-11-04 02:39:50.132637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.686 [2024-11-04 02:39:50.162441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.686 [2024-11-04 02:39:50.162616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:03.686 [2024-11-04 02:39:50.162634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.752 ms 00:29:03.686 [2024-11-04 02:39:50.162646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:29:03.686 [2024-11-04 02:39:50.162681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.686 [2024-11-04 02:39:50.162690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:03.686 [2024-11-04 02:39:50.162697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:03.686 [2024-11-04 02:39:50.162707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.686 [2024-11-04 02:39:50.163214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.686 [2024-11-04 02:39:50.163242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:03.686 [2024-11-04 02:39:50.163250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.459 ms 00:29:03.686 [2024-11-04 02:39:50.163260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.686 [2024-11-04 02:39:50.163352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.686 [2024-11-04 02:39:50.163363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:03.686 [2024-11-04 02:39:50.163372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:29:03.686 [2024-11-04 02:39:50.163382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.686 [2024-11-04 02:39:50.176799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.686 [2024-11-04 02:39:50.176839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:03.686 [2024-11-04 02:39:50.176848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.400 ms 00:29:03.686 [2024-11-04 02:39:50.176858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.686 [2024-11-04 02:39:50.186592] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:03.687 [2024-11-04 02:39:50.189435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.687 [2024-11-04 02:39:50.189579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:03.687 [2024-11-04 02:39:50.189598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.488 ms 00:29:03.687 [2024-11-04 02:39:50.189605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.687 [2024-11-04 02:39:50.274471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.687 [2024-11-04 02:39:50.274509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:29:03.687 [2024-11-04 02:39:50.274523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.839 ms 00:29:03.687 [2024-11-04 02:39:50.274530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.687 [2024-11-04 02:39:50.274675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.687 [2024-11-04 02:39:50.274685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:03.687 [2024-11-04 02:39:50.274695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:29:03.687 [2024-11-04 02:39:50.274703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.687 [2024-11-04 02:39:50.293031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.687 [2024-11-04 02:39:50.293166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 
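Annotation: the trace above walks the full FTL construction that this startup sequence reports on. A thin-provisioned lvol on nvme0n1 supplies the data volume, a split of a second NVMe namespace supplies the NV cache, and bdev_ftl_create ties them together with a 10 MiB L2P DRAM cap and fast shutdown enabled (the "_fast" variant under test). A minimal standalone sketch of the same stack, using only RPCs that appear in this log; it assumes a running SPDK target with the base controller already attached as nvme0, and the sizes are the ones measured in this run, not requirements:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Sketch of the get_bdev_size helper traced above (autotest_common.sh
# @1380-1390): MiB = block_size * num_blocks / 1024 / 1024.
get_bdev_size() {
    local info
    info=$($RPC bdev_get_bdevs -b "$1")
    echo $(( $(jq '.[] .block_size' <<< "$info") * $(jq '.[] .num_blocks' <<< "$info") / 1024 / 1024 ))
}

# Data volume: lvstore "lvs" on the base namespace, thin 103424 MiB lvol.
lvs=$($RPC bdev_lvol_create_lvstore nvme0n1 lvs)              # prints the lvstore UUID
base=$($RPC bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")   # prints the lvol UUID
size=$(get_bdev_size "$base")                                 # 4096 * 26476544 / 2^20 = 103424

# NV cache: attach the cache controller, carve off one split sized like
# the base_size derived above (5171 = 103424 / 20, i.e. 5% of the data
# volume; the exact ratio is inferred from the numbers in this log).
$RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # exposes nvc0n1
$RPC bdev_split_create nvc0n1 -s $(( size / 20 )) 1                # exposes nvc0n1p0

# FTL bdev on top; -t 240 widens the RPC timeout because first-time
# creation scrubs the NV cache (~3.6 s here, far longer on real media).
$RPC -t 240 bdev_ftl_create -b ftl0 -d "$base" --l2p_dram_limit 10 \
    -c nvc0n1p0 --fast-shutdown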
00:29:03.687 [2024-11-04 02:39:50.293184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.291 ms 00:29:03.687 [2024-11-04 02:39:50.293191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.687 [2024-11-04 02:39:50.310700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.687 [2024-11-04 02:39:50.310725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:29:03.687 [2024-11-04 02:39:50.310735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.478 ms 00:29:03.687 [2024-11-04 02:39:50.310741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.687 [2024-11-04 02:39:50.311190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.687 [2024-11-04 02:39:50.311200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:03.687 [2024-11-04 02:39:50.311208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:29:03.687 [2024-11-04 02:39:50.311214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.687 [2024-11-04 02:39:50.370676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.687 [2024-11-04 02:39:50.370703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:29:03.687 [2024-11-04 02:39:50.370715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.435 ms 00:29:03.687 [2024-11-04 02:39:50.370722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.687 [2024-11-04 02:39:50.389854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.687 [2024-11-04 02:39:50.389897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:29:03.687 [2024-11-04 02:39:50.389910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.087 ms 00:29:03.687 [2024-11-04 02:39:50.389917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.687 [2024-11-04 02:39:50.407984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.687 [2024-11-04 02:39:50.408082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:29:03.687 [2024-11-04 02:39:50.408096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.048 ms 00:29:03.687 [2024-11-04 02:39:50.408102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.687 [2024-11-04 02:39:50.426809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.687 [2024-11-04 02:39:50.426833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:03.687 [2024-11-04 02:39:50.426842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.686 ms 00:29:03.687 [2024-11-04 02:39:50.426848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.687 [2024-11-04 02:39:50.426884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.687 [2024-11-04 02:39:50.426891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:03.687 [2024-11-04 02:39:50.426901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:29:03.687 [2024-11-04 02:39:50.426907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.687 [2024-11-04 02:39:50.426965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.687 [2024-11-04 02:39:50.426972] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:03.687 [2024-11-04 02:39:50.426980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:29:03.687 [2024-11-04 02:39:50.426986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.687 [2024-11-04 02:39:50.427673] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3978.151 ms, result 0 00:29:03.687 { 00:29:03.687 "name": "ftl0", 00:29:03.687 "uuid": "f95c5d21-0b04-45bd-8eaf-594cb18bd776" 00:29:03.687 } 00:29:03.687 02:39:50 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:29:03.687 02:39:50 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:29:03.687 02:39:50 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:29:03.687 02:39:50 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:29:03.947 [2024-11-04 02:39:50.839347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.947 [2024-11-04 02:39:50.839380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:03.947 [2024-11-04 02:39:50.839390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:03.947 [2024-11-04 02:39:50.839403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.947 [2024-11-04 02:39:50.839420] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:03.947 [2024-11-04 02:39:50.841472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.947 [2024-11-04 02:39:50.841494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:03.947 [2024-11-04 02:39:50.841503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.038 ms 00:29:03.947 [2024-11-04 02:39:50.841510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.947 [2024-11-04 02:39:50.841716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.947 [2024-11-04 02:39:50.841725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:03.947 [2024-11-04 02:39:50.841733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:29:03.947 [2024-11-04 02:39:50.841740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.947 [2024-11-04 02:39:50.844219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.947 [2024-11-04 02:39:50.844235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:03.947 [2024-11-04 02:39:50.844243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.466 ms 00:29:03.947 [2024-11-04 02:39:50.844249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.947 [2024-11-04 02:39:50.848963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.947 [2024-11-04 02:39:50.849081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:03.947 [2024-11-04 02:39:50.849097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.699 ms 00:29:03.947 [2024-11-04 02:39:50.849103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.947 [2024-11-04 02:39:50.867261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.947 
[2024-11-04 02:39:50.867286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:03.947 [2024-11-04 02:39:50.867296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.114 ms 00:29:03.947 [2024-11-04 02:39:50.867302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.947 [2024-11-04 02:39:50.879709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.947 [2024-11-04 02:39:50.879735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:03.947 [2024-11-04 02:39:50.879746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.376 ms 00:29:03.947 [2024-11-04 02:39:50.879753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.947 [2024-11-04 02:39:50.879879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.947 [2024-11-04 02:39:50.879888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:03.947 [2024-11-04 02:39:50.879896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:29:03.947 [2024-11-04 02:39:50.879902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.947 [2024-11-04 02:39:50.898228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.947 [2024-11-04 02:39:50.898252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:03.947 [2024-11-04 02:39:50.898262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.313 ms 00:29:03.947 [2024-11-04 02:39:50.898268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.947 [2024-11-04 02:39:50.916482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.947 [2024-11-04 02:39:50.916504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:03.947 [2024-11-04 02:39:50.916513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.186 ms 00:29:03.947 [2024-11-04 02:39:50.916519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.947 [2024-11-04 02:39:50.934451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.947 [2024-11-04 02:39:50.934474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:03.947 [2024-11-04 02:39:50.934483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.901 ms 00:29:03.947 [2024-11-04 02:39:50.934489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.947 [2024-11-04 02:39:50.951648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.947 [2024-11-04 02:39:50.951672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:03.947 [2024-11-04 02:39:50.951682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.096 ms 00:29:03.947 [2024-11-04 02:39:50.951687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.947 [2024-11-04 02:39:50.951715] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:03.947 [2024-11-04 02:39:50.951726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:03.947 [2024-11-04 02:39:50.951736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:03.947 [2024-11-04 02:39:50.951742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:03.947 [2024-11-04 02:39:50.951750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:03.947 [2024-11-04 02:39:50.951755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.951762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.951768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.951777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.951783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.951790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.951795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.951802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.951808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.951815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.951821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.951827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.951834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.951840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.951846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.951854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.951860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.951881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.951887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.951896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.951901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.951909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.951915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.951923] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.951929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.951937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.951943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.951950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.951956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.951963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.951968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.951975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.951981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.951988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.951993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 
02:39:50.952084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:29:03.948 [2024-11-04 02:39:50.952260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:03.948 [2024-11-04 02:39:50.952369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:03.949 [2024-11-04 02:39:50.952376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:03.949 [2024-11-04 02:39:50.952381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:03.949 [2024-11-04 02:39:50.952388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:03.949 [2024-11-04 02:39:50.952393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:03.949 [2024-11-04 02:39:50.952401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:03.949 [2024-11-04 02:39:50.952413] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:03.949 [2024-11-04 02:39:50.952420] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f95c5d21-0b04-45bd-8eaf-594cb18bd776 00:29:03.949 
[2024-11-04 02:39:50.952426] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:03.949 [2024-11-04 02:39:50.952436] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:03.949 [2024-11-04 02:39:50.952442] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:03.949 [2024-11-04 02:39:50.952448] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:03.949 [2024-11-04 02:39:50.952456] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:03.949 [2024-11-04 02:39:50.952463] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:03.949 [2024-11-04 02:39:50.952468] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:03.949 [2024-11-04 02:39:50.952474] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:03.949 [2024-11-04 02:39:50.952479] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:03.949 [2024-11-04 02:39:50.952485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.949 [2024-11-04 02:39:50.952491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:03.949 [2024-11-04 02:39:50.952498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.771 ms 00:29:03.949 [2024-11-04 02:39:50.952505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.949 [2024-11-04 02:39:50.962128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.949 [2024-11-04 02:39:50.962238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:03.949 [2024-11-04 02:39:50.962252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.600 ms 00:29:03.949 [2024-11-04 02:39:50.962259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.949 [2024-11-04 02:39:50.962524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.949 [2024-11-04 02:39:50.962536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:03.949 [2024-11-04 02:39:50.962544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:29:03.949 [2024-11-04 02:39:50.962550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.949 [2024-11-04 02:39:50.995437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.949 [2024-11-04 02:39:50.995462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:03.949 [2024-11-04 02:39:50.995472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.949 [2024-11-04 02:39:50.995478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.949 [2024-11-04 02:39:50.995522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.949 [2024-11-04 02:39:50.995528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:03.949 [2024-11-04 02:39:50.995536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.949 [2024-11-04 02:39:50.995543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.949 [2024-11-04 02:39:50.995595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.949 [2024-11-04 02:39:50.995603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:03.949 [2024-11-04 02:39:50.995618] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.949 [2024-11-04 02:39:50.995624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.949 [2024-11-04 02:39:50.995640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.949 [2024-11-04 02:39:50.995647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:03.949 [2024-11-04 02:39:50.995655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.949 [2024-11-04 02:39:50.995661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.949 [2024-11-04 02:39:51.054589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.949 [2024-11-04 02:39:51.054619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:03.949 [2024-11-04 02:39:51.054629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.949 [2024-11-04 02:39:51.054635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.208 [2024-11-04 02:39:51.102172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.208 [2024-11-04 02:39:51.102205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:04.208 [2024-11-04 02:39:51.102216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.208 [2024-11-04 02:39:51.102222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.208 [2024-11-04 02:39:51.102293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.208 [2024-11-04 02:39:51.102303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:04.208 [2024-11-04 02:39:51.102310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.208 [2024-11-04 02:39:51.102316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.208 [2024-11-04 02:39:51.102353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.208 [2024-11-04 02:39:51.102361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:04.208 [2024-11-04 02:39:51.102368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.208 [2024-11-04 02:39:51.102374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.208 [2024-11-04 02:39:51.102445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.208 [2024-11-04 02:39:51.102454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:04.208 [2024-11-04 02:39:51.102464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.208 [2024-11-04 02:39:51.102469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.208 [2024-11-04 02:39:51.102495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.208 [2024-11-04 02:39:51.102503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:04.208 [2024-11-04 02:39:51.102510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.208 [2024-11-04 02:39:51.102516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.208 [2024-11-04 02:39:51.102546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.208 [2024-11-04 02:39:51.102553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:29:04.208 [2024-11-04 02:39:51.102562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.208 [2024-11-04 02:39:51.102568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.208 [2024-11-04 02:39:51.102604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:04.208 [2024-11-04 02:39:51.102612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:04.208 [2024-11-04 02:39:51.102620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:04.208 [2024-11-04 02:39:51.102626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:04.208 [2024-11-04 02:39:51.102726] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 263.350 ms, result 0 00:29:04.208 true 00:29:04.208 02:39:51 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 81323 00:29:04.208 02:39:51 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # '[' -z 81323 ']' 00:29:04.208 02:39:51 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # kill -0 81323 00:29:04.208 02:39:51 ftl.ftl_restore_fast -- common/autotest_common.sh@957 -- # uname 00:29:04.208 02:39:51 ftl.ftl_restore_fast -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:04.208 02:39:51 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81323 00:29:04.208 02:39:51 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:04.208 02:39:51 ftl.ftl_restore_fast -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:04.208 02:39:51 ftl.ftl_restore_fast -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81323' 00:29:04.208 killing process with pid 81323 00:29:04.208 02:39:51 ftl.ftl_restore_fast -- common/autotest_common.sh@971 -- # kill 81323 00:29:04.208 02:39:51 ftl.ftl_restore_fast -- common/autotest_common.sh@976 -- # wait 81323 00:29:10.790 02:39:56 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:29:14.107 262144+0 records in 00:29:14.107 262144+0 records out 00:29:14.107 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.24408 s, 253 MB/s 00:29:14.107 02:40:01 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:16.009 02:40:02 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:16.009 [2024-11-04 02:40:02.827069] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
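The dd step above generates the test payload for the restore test: bs=4K count=256K is 262144 records of 4096 bytes, i.e. 1073741824 bytes (exactly 1 GiB), and dd reports throughput in decimal megabytes, so 1073741824 B / 4.24408 s comes to roughly 253 MB/s — consistent with the summary line. A minimal shell check of that arithmetic (illustrative only, not part of the test scripts):

  # Sanity-check the dd summary: bs=4K count=256K => 1 GiB total
  awk 'BEGIN {
    bytes = 262144 * 4096                        # records in * block size
    printf "%d bytes (%.1f GiB)\n", bytes, bytes / 1024^3
    printf "%.0f MB/s\n", bytes / 4.24408 / 1e6  # decimal MB, as dd prints
  }'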
00:29:16.009 [2024-11-04 02:40:02.827159] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81544 ] 00:29:16.009 [2024-11-04 02:40:02.981559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.009 [2024-11-04 02:40:03.081236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.269 [2024-11-04 02:40:03.372392] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:16.269 [2024-11-04 02:40:03.372475] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:16.531 [2024-11-04 02:40:03.535338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.531 [2024-11-04 02:40:03.535624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:16.531 [2024-11-04 02:40:03.535656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:16.531 [2024-11-04 02:40:03.535666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.531 [2024-11-04 02:40:03.535739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.531 [2024-11-04 02:40:03.535750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:16.531 [2024-11-04 02:40:03.535761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:29:16.531 [2024-11-04 02:40:03.535770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.531 [2024-11-04 02:40:03.535791] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:16.531 [2024-11-04 02:40:03.536507] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:16.531 [2024-11-04 02:40:03.536532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.531 [2024-11-04 02:40:03.536541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:16.531 [2024-11-04 02:40:03.536550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.746 ms 00:29:16.531 [2024-11-04 02:40:03.536558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.531 [2024-11-04 02:40:03.538326] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:16.531 [2024-11-04 02:40:03.553138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.531 [2024-11-04 02:40:03.553358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:16.531 [2024-11-04 02:40:03.553382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.813 ms 00:29:16.531 [2024-11-04 02:40:03.553392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.531 [2024-11-04 02:40:03.553553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.531 [2024-11-04 02:40:03.553586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:16.531 [2024-11-04 02:40:03.553598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:29:16.531 [2024-11-04 02:40:03.553607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.531 [2024-11-04 02:40:03.562194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:16.531 [2024-11-04 02:40:03.562241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:16.531 [2024-11-04 02:40:03.562253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.498 ms 00:29:16.531 [2024-11-04 02:40:03.562262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.531 [2024-11-04 02:40:03.562353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.531 [2024-11-04 02:40:03.562363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:16.531 [2024-11-04 02:40:03.562372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:29:16.531 [2024-11-04 02:40:03.562380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.531 [2024-11-04 02:40:03.562429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.531 [2024-11-04 02:40:03.562442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:16.532 [2024-11-04 02:40:03.562450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:16.532 [2024-11-04 02:40:03.562459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.532 [2024-11-04 02:40:03.562484] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:16.532 [2024-11-04 02:40:03.566661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.532 [2024-11-04 02:40:03.566702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:16.532 [2024-11-04 02:40:03.566713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.183 ms 00:29:16.532 [2024-11-04 02:40:03.566725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.532 [2024-11-04 02:40:03.566763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.532 [2024-11-04 02:40:03.566772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:16.532 [2024-11-04 02:40:03.566781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:16.532 [2024-11-04 02:40:03.566789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.532 [2024-11-04 02:40:03.566844] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:16.532 [2024-11-04 02:40:03.566891] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:16.532 [2024-11-04 02:40:03.566931] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:16.532 [2024-11-04 02:40:03.566952] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:16.532 [2024-11-04 02:40:03.567059] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:16.532 [2024-11-04 02:40:03.567073] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:16.532 [2024-11-04 02:40:03.567085] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:16.532 [2024-11-04 02:40:03.567095] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:16.532 [2024-11-04 02:40:03.567105] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:16.532 [2024-11-04 02:40:03.567113] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:16.532 [2024-11-04 02:40:03.567123] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:16.532 [2024-11-04 02:40:03.567133] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:16.532 [2024-11-04 02:40:03.567141] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:16.532 [2024-11-04 02:40:03.567153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.532 [2024-11-04 02:40:03.567161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:16.532 [2024-11-04 02:40:03.567168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:29:16.532 [2024-11-04 02:40:03.567175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.532 [2024-11-04 02:40:03.567260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.532 [2024-11-04 02:40:03.567270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:16.532 [2024-11-04 02:40:03.567280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:29:16.532 [2024-11-04 02:40:03.567288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.532 [2024-11-04 02:40:03.567394] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:16.532 [2024-11-04 02:40:03.567409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:16.532 [2024-11-04 02:40:03.567418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:16.532 [2024-11-04 02:40:03.567431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:16.532 [2024-11-04 02:40:03.567440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:16.532 [2024-11-04 02:40:03.567449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:16.532 [2024-11-04 02:40:03.567456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:16.532 [2024-11-04 02:40:03.567464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:16.532 [2024-11-04 02:40:03.567472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:16.532 [2024-11-04 02:40:03.567479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:16.532 [2024-11-04 02:40:03.567486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:16.532 [2024-11-04 02:40:03.567493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:16.532 [2024-11-04 02:40:03.567499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:16.532 [2024-11-04 02:40:03.567510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:16.532 [2024-11-04 02:40:03.567518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:16.532 [2024-11-04 02:40:03.567531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:16.532 [2024-11-04 02:40:03.567538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:16.532 [2024-11-04 02:40:03.567545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:16.532 [2024-11-04 02:40:03.567552] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:16.532 [2024-11-04 02:40:03.567559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:16.532 [2024-11-04 02:40:03.567566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:16.532 [2024-11-04 02:40:03.567573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:16.532 [2024-11-04 02:40:03.567581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:16.532 [2024-11-04 02:40:03.567589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:16.532 [2024-11-04 02:40:03.567596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:16.532 [2024-11-04 02:40:03.567603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:16.532 [2024-11-04 02:40:03.567625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:16.532 [2024-11-04 02:40:03.567632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:16.532 [2024-11-04 02:40:03.567639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:16.532 [2024-11-04 02:40:03.567646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:16.532 [2024-11-04 02:40:03.567653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:16.532 [2024-11-04 02:40:03.567660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:16.532 [2024-11-04 02:40:03.567668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:16.532 [2024-11-04 02:40:03.567676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:16.532 [2024-11-04 02:40:03.567684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:16.532 [2024-11-04 02:40:03.567691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:16.532 [2024-11-04 02:40:03.567698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:16.532 [2024-11-04 02:40:03.567705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:16.532 [2024-11-04 02:40:03.567712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:16.532 [2024-11-04 02:40:03.567720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:16.532 [2024-11-04 02:40:03.567728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:16.532 [2024-11-04 02:40:03.567735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:16.532 [2024-11-04 02:40:03.567741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:16.532 [2024-11-04 02:40:03.567748] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:16.532 [2024-11-04 02:40:03.567756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:16.532 [2024-11-04 02:40:03.567766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:16.532 [2024-11-04 02:40:03.567774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:16.532 [2024-11-04 02:40:03.567782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:16.532 [2024-11-04 02:40:03.567798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:16.532 [2024-11-04 02:40:03.567805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:16.532 
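The ftl_layout dump above prints each region as three separate dump_region records — name, offset, blocks — rather than as a table. The triplets can be recombined for easier reading from a saved copy of this console output; a hedged one-liner, assuming the log was captured to a file with one record per line (console.log is a hypothetical name, not produced by this run):

  # Condense dump_region triplets into one row per region:
  #   Region <name>    offset: <N> MiB    blocks: <N> MiB
  grep 'dump_region' console.log \
    | grep -o 'Region [a-z0-9_]*\|offset: [0-9.]* MiB\|blocks: [0-9.]* MiB' \
    | paste - - -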
[2024-11-04 02:40:03.567812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:16.532 [2024-11-04 02:40:03.567819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:16.532 [2024-11-04 02:40:03.567825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:16.532 [2024-11-04 02:40:03.567833] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:16.532 [2024-11-04 02:40:03.567844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:16.532 [2024-11-04 02:40:03.567853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:16.532 [2024-11-04 02:40:03.567860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:16.532 [2024-11-04 02:40:03.567883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:16.532 [2024-11-04 02:40:03.567890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:16.532 [2024-11-04 02:40:03.567897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:16.532 [2024-11-04 02:40:03.567905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:16.532 [2024-11-04 02:40:03.567912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:16.532 [2024-11-04 02:40:03.567920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:16.532 [2024-11-04 02:40:03.567928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:16.532 [2024-11-04 02:40:03.567936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:16.532 [2024-11-04 02:40:03.567943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:16.532 [2024-11-04 02:40:03.567949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:16.532 [2024-11-04 02:40:03.567957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:16.533 [2024-11-04 02:40:03.567964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:16.533 [2024-11-04 02:40:03.567971] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:16.533 [2024-11-04 02:40:03.567981] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:16.533 [2024-11-04 02:40:03.567992] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:16.533 [2024-11-04 02:40:03.568000] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:16.533 [2024-11-04 02:40:03.568008] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:16.533 [2024-11-04 02:40:03.568015] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:16.533 [2024-11-04 02:40:03.568024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.533 [2024-11-04 02:40:03.568033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:16.533 [2024-11-04 02:40:03.568045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.699 ms 00:29:16.533 [2024-11-04 02:40:03.568053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.533 [2024-11-04 02:40:03.600527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.533 [2024-11-04 02:40:03.600730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:16.533 [2024-11-04 02:40:03.600750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.426 ms 00:29:16.533 [2024-11-04 02:40:03.600759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.533 [2024-11-04 02:40:03.600856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.533 [2024-11-04 02:40:03.600895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:16.533 [2024-11-04 02:40:03.600905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:29:16.533 [2024-11-04 02:40:03.600913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.793 [2024-11-04 02:40:03.642089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.793 [2024-11-04 02:40:03.642234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:16.793 [2024-11-04 02:40:03.642252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.109 ms 00:29:16.793 [2024-11-04 02:40:03.642261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.793 [2024-11-04 02:40:03.642300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.793 [2024-11-04 02:40:03.642312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:16.793 [2024-11-04 02:40:03.642321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:16.793 [2024-11-04 02:40:03.642333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.793 [2024-11-04 02:40:03.642707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.793 [2024-11-04 02:40:03.642724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:16.793 [2024-11-04 02:40:03.642733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:29:16.793 [2024-11-04 02:40:03.642741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.793 [2024-11-04 02:40:03.642892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.793 [2024-11-04 02:40:03.642904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:16.793 [2024-11-04 02:40:03.642913] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:29:16.793 [2024-11-04 02:40:03.642921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.793 [2024-11-04 02:40:03.656355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.793 [2024-11-04 02:40:03.656385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:16.793 [2024-11-04 02:40:03.656396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.411 ms 00:29:16.793 [2024-11-04 02:40:03.656406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.793 [2024-11-04 02:40:03.669129] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:16.793 [2024-11-04 02:40:03.669169] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:16.793 [2024-11-04 02:40:03.669181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.793 [2024-11-04 02:40:03.669189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:16.793 [2024-11-04 02:40:03.669199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.682 ms 00:29:16.793 [2024-11-04 02:40:03.669206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.793 [2024-11-04 02:40:03.693406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.793 [2024-11-04 02:40:03.693453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:16.793 [2024-11-04 02:40:03.693469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.160 ms 00:29:16.793 [2024-11-04 02:40:03.693477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.793 [2024-11-04 02:40:03.705505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.794 [2024-11-04 02:40:03.705547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:16.794 [2024-11-04 02:40:03.705557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.985 ms 00:29:16.794 [2024-11-04 02:40:03.705564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.794 [2024-11-04 02:40:03.717417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.794 [2024-11-04 02:40:03.717450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:16.794 [2024-11-04 02:40:03.717461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.813 ms 00:29:16.794 [2024-11-04 02:40:03.717468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.794 [2024-11-04 02:40:03.718100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.794 [2024-11-04 02:40:03.718124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:16.794 [2024-11-04 02:40:03.718134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:29:16.794 [2024-11-04 02:40:03.718142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.794 [2024-11-04 02:40:03.777075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.794 [2024-11-04 02:40:03.777123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:16.794 [2024-11-04 02:40:03.777136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 58.915 ms 00:29:16.794 [2024-11-04 02:40:03.777145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.794 [2024-11-04 02:40:03.787860] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:16.794 [2024-11-04 02:40:03.790706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.794 [2024-11-04 02:40:03.790745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:16.794 [2024-11-04 02:40:03.790757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.510 ms 00:29:16.794 [2024-11-04 02:40:03.790765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.794 [2024-11-04 02:40:03.790901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.794 [2024-11-04 02:40:03.790914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:16.794 [2024-11-04 02:40:03.790925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:29:16.794 [2024-11-04 02:40:03.790934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.794 [2024-11-04 02:40:03.791005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.794 [2024-11-04 02:40:03.791019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:16.794 [2024-11-04 02:40:03.791028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:29:16.794 [2024-11-04 02:40:03.791036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.794 [2024-11-04 02:40:03.791061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.794 [2024-11-04 02:40:03.791070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:16.794 [2024-11-04 02:40:03.791079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:16.794 [2024-11-04 02:40:03.791086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.794 [2024-11-04 02:40:03.791114] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:16.794 [2024-11-04 02:40:03.791124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.794 [2024-11-04 02:40:03.791133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:16.794 [2024-11-04 02:40:03.791144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:16.794 [2024-11-04 02:40:03.791152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.794 [2024-11-04 02:40:03.816116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.794 [2024-11-04 02:40:03.816160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:16.794 [2024-11-04 02:40:03.816173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.942 ms 00:29:16.794 [2024-11-04 02:40:03.816182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.794 [2024-11-04 02:40:03.816270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.794 [2024-11-04 02:40:03.816281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:16.794 [2024-11-04 02:40:03.816290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:29:16.794 [2024-11-04 02:40:03.816299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
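Each management step in the startup sequence above is traced as a four-record group — Action, name, duration, status — emitted by mngt/ftl_mngt.c trace_step, and the per-step durations roughly add up to the total printed in the 'Management process finished' summary that follows. A sketch of how to total them (console.log is a hypothetical capture of a single management process's records, not an artifact of this run; over a larger slice the sum would mix steps from several processes):

  # Sum per-step durations from trace_step records; compare against the
  # 'Management process finished ... duration = N ms' summary line.
  grep -o 'duration: [0-9.]* ms' console.log \
    | awk '{ total += $2 } END { printf "steps total: %.3f ms\n", total }'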
00:29:16.794 [2024-11-04 02:40:03.817530] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 281.734 ms, result 0 00:29:17.734  [2024-11-04T02:40:06.225Z] Copying: 14/1024 [MB] (14 MBps) [2024-11-04T02:40:07.167Z] Copying: 33/1024 [MB] (19 MBps) [2024-11-04T02:40:08.106Z] Copying: 44/1024 [MB] (11 MBps) [2024-11-04T02:40:09.084Z] Copying: 56/1024 [MB] (11 MBps) [2024-11-04T02:40:10.027Z] Copying: 66/1024 [MB] (10 MBps) [2024-11-04T02:40:10.970Z] Copying: 80/1024 [MB] (13 MBps) [2024-11-04T02:40:11.912Z] Copying: 96/1024 [MB] (16 MBps) [2024-11-04T02:40:12.851Z] Copying: 110/1024 [MB] (14 MBps) [2024-11-04T02:40:14.250Z] Copying: 131/1024 [MB] (20 MBps) [2024-11-04T02:40:15.182Z] Copying: 141/1024 [MB] (10 MBps) [2024-11-04T02:40:16.114Z] Copying: 165/1024 [MB] (24 MBps) [2024-11-04T02:40:17.051Z] Copying: 198/1024 [MB] (32 MBps) [2024-11-04T02:40:17.988Z] Copying: 220/1024 [MB] (22 MBps) [2024-11-04T02:40:18.931Z] Copying: 234/1024 [MB] (14 MBps) [2024-11-04T02:40:19.872Z] Copying: 246/1024 [MB] (12 MBps) [2024-11-04T02:40:21.252Z] Copying: 266/1024 [MB] (20 MBps) [2024-11-04T02:40:22.186Z] Copying: 282/1024 [MB] (15 MBps) [2024-11-04T02:40:23.120Z] Copying: 312/1024 [MB] (29 MBps) [2024-11-04T02:40:24.054Z] Copying: 347/1024 [MB] (35 MBps) [2024-11-04T02:40:24.987Z] Copying: 378/1024 [MB] (30 MBps) [2024-11-04T02:40:25.958Z] Copying: 411/1024 [MB] (33 MBps) [2024-11-04T02:40:26.909Z] Copying: 448/1024 [MB] (36 MBps) [2024-11-04T02:40:27.842Z] Copying: 477/1024 [MB] (29 MBps) [2024-11-04T02:40:29.222Z] Copying: 508/1024 [MB] (31 MBps) [2024-11-04T02:40:30.175Z] Copying: 528/1024 [MB] (19 MBps) [2024-11-04T02:40:31.119Z] Copying: 541/1024 [MB] (12 MBps) [2024-11-04T02:40:32.054Z] Copying: 561/1024 [MB] (20 MBps) [2024-11-04T02:40:32.988Z] Copying: 595/1024 [MB] (33 MBps) [2024-11-04T02:40:33.931Z] Copying: 639/1024 [MB] (44 MBps) [2024-11-04T02:40:34.883Z] Copying: 666/1024 [MB] (27 MBps) [2024-11-04T02:40:36.269Z] Copying: 679/1024 [MB] (12 MBps) [2024-11-04T02:40:36.840Z] Copying: 689/1024 [MB] (10 MBps) [2024-11-04T02:40:38.222Z] Copying: 704/1024 [MB] (14 MBps) [2024-11-04T02:40:39.163Z] Copying: 719/1024 [MB] (14 MBps) [2024-11-04T02:40:40.095Z] Copying: 730/1024 [MB] (10 MBps) [2024-11-04T02:40:41.028Z] Copying: 763/1024 [MB] (32 MBps) [2024-11-04T02:40:41.968Z] Copying: 784/1024 [MB] (21 MBps) [2024-11-04T02:40:42.913Z] Copying: 798/1024 [MB] (13 MBps) [2024-11-04T02:40:43.926Z] Copying: 808/1024 [MB] (10 MBps) [2024-11-04T02:40:44.870Z] Copying: 818/1024 [MB] (10 MBps) [2024-11-04T02:40:46.252Z] Copying: 828/1024 [MB] (10 MBps) [2024-11-04T02:40:47.195Z] Copying: 839/1024 [MB] (10 MBps) [2024-11-04T02:40:48.141Z] Copying: 855/1024 [MB] (16 MBps) [2024-11-04T02:40:49.077Z] Copying: 866/1024 [MB] (10 MBps) [2024-11-04T02:40:50.016Z] Copying: 885/1024 [MB] (19 MBps) [2024-11-04T02:40:50.956Z] Copying: 897/1024 [MB] (12 MBps) [2024-11-04T02:40:51.897Z] Copying: 914/1024 [MB] (16 MBps) [2024-11-04T02:40:52.840Z] Copying: 929/1024 [MB] (15 MBps) [2024-11-04T02:40:54.229Z] Copying: 945/1024 [MB] (15 MBps) [2024-11-04T02:40:55.161Z] Copying: 958/1024 [MB] (12 MBps) [2024-11-04T02:40:56.092Z] Copying: 984/1024 [MB] (26 MBps) [2024-11-04T02:40:56.352Z] Copying: 1010/1024 [MB] (25 MBps) [2024-11-04T02:40:56.352Z] Copying: 1024/1024 [MB] (average 19 MBps)[2024-11-04 02:40:56.329012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.241 [2024-11-04 02:40:56.329047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Deinit core IO channel 00:30:09.241 [2024-11-04 02:40:56.329059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:09.241 [2024-11-04 02:40:56.329065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.241 [2024-11-04 02:40:56.329081] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:09.241 [2024-11-04 02:40:56.331157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.241 [2024-11-04 02:40:56.331182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:09.241 [2024-11-04 02:40:56.331191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.065 ms 00:30:09.241 [2024-11-04 02:40:56.331197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.241 [2024-11-04 02:40:56.332746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.241 [2024-11-04 02:40:56.332774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:09.241 [2024-11-04 02:40:56.332782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.529 ms 00:30:09.241 [2024-11-04 02:40:56.332788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.241 [2024-11-04 02:40:56.332808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.241 [2024-11-04 02:40:56.332814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:09.241 [2024-11-04 02:40:56.332820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:09.241 [2024-11-04 02:40:56.332826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.241 [2024-11-04 02:40:56.332863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.241 [2024-11-04 02:40:56.332880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:09.241 [2024-11-04 02:40:56.332888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:09.241 [2024-11-04 02:40:56.332894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.241 [2024-11-04 02:40:56.332904] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:09.241 [2024-11-04 02:40:56.332914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.332921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.332928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.332934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.332940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.332946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.332952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.332958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.332964] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.332970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.332975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.332981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.332987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.332992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.332998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.333003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.333009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.333016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.333021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.333027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.333033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.333040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.333046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.333052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.333057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.333063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.333069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.333074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.333080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.333085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.333091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.333103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.333109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 
02:40:56.333115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.333120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.333126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.333131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.333137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.333143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.333148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.333154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.333159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.333165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.333170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:09.241 [2024-11-04 02:40:56.333176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 
00:30:09.242 [2024-11-04 02:40:56.333260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 
wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:09.242 [2024-11-04 02:40:56.333508] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:09.242 [2024-11-04 02:40:56.333514] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f95c5d21-0b04-45bd-8eaf-594cb18bd776 00:30:09.242 [2024-11-04 02:40:56.333519] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:09.242 [2024-11-04 02:40:56.333525] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:30:09.242 [2024-11-04 02:40:56.333531] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:09.242 [2024-11-04 02:40:56.333537] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:09.242 [2024-11-04 02:40:56.333542] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:09.242 [2024-11-04 02:40:56.333549] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:09.242 [2024-11-04 02:40:56.333555] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:09.242 [2024-11-04 02:40:56.333559] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:09.242 [2024-11-04 02:40:56.333564] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:09.242 [2024-11-04 02:40:56.333569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.242 [2024-11-04 02:40:56.333575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:09.242 [2024-11-04 02:40:56.333584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.666 ms 00:30:09.242 [2024-11-04 02:40:56.333590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.242 [2024-11-04 02:40:56.343425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.242 [2024-11-04 02:40:56.343453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:09.242 [2024-11-04 02:40:56.343466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.823 ms 00:30:09.242 [2024-11-04 02:40:56.343472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.242 [2024-11-04 02:40:56.343759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.242 [2024-11-04 02:40:56.343771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:09.242 [2024-11-04 02:40:56.343778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:30:09.242 [2024-11-04 02:40:56.343783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.501 [2024-11-04 02:40:56.369242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.501 [2024-11-04 02:40:56.369270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:09.501 [2024-11-04 02:40:56.369279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.501 [2024-11-04 02:40:56.369285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.501 [2024-11-04 02:40:56.369325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.502 [2024-11-04 02:40:56.369331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:09.502 [2024-11-04 02:40:56.369337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.502 [2024-11-04 02:40:56.369343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.502 [2024-11-04 02:40:56.369375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.502 [2024-11-04 02:40:56.369383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:09.502 [2024-11-04 02:40:56.369389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.502 [2024-11-04 02:40:56.369396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.502 [2024-11-04 02:40:56.369407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.502 [2024-11-04 02:40:56.369413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:09.502 [2024-11-04 02:40:56.369419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.502 [2024-11-04 02:40:56.369424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.502 [2024-11-04 02:40:56.428944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.502 [2024-11-04 02:40:56.428974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV 
cache 00:30:09.502 [2024-11-04 02:40:56.428986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.502 [2024-11-04 02:40:56.428992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.502 [2024-11-04 02:40:56.478108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.502 [2024-11-04 02:40:56.478142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:09.502 [2024-11-04 02:40:56.478151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.502 [2024-11-04 02:40:56.478158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.502 [2024-11-04 02:40:56.478212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.502 [2024-11-04 02:40:56.478220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:09.502 [2024-11-04 02:40:56.478226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.502 [2024-11-04 02:40:56.478232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.502 [2024-11-04 02:40:56.478262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.502 [2024-11-04 02:40:56.478269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:09.502 [2024-11-04 02:40:56.478274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.502 [2024-11-04 02:40:56.478281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.502 [2024-11-04 02:40:56.478337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.502 [2024-11-04 02:40:56.478345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:09.502 [2024-11-04 02:40:56.478351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.502 [2024-11-04 02:40:56.478357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.502 [2024-11-04 02:40:56.478382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.502 [2024-11-04 02:40:56.478390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:09.502 [2024-11-04 02:40:56.478396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.502 [2024-11-04 02:40:56.478402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.502 [2024-11-04 02:40:56.478429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.502 [2024-11-04 02:40:56.478436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:09.502 [2024-11-04 02:40:56.478443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.502 [2024-11-04 02:40:56.478449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.502 [2024-11-04 02:40:56.478482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.502 [2024-11-04 02:40:56.478489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:09.502 [2024-11-04 02:40:56.478495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.502 [2024-11-04 02:40:56.478500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.502 [2024-11-04 02:40:56.478589] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration 
= 149.553 ms, result 0 00:30:10.071 00:30:10.071 00:30:10.071 02:40:57 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:30:10.071 [2024-11-04 02:40:57.112598] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:30:10.071 [2024-11-04 02:40:57.112729] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82090 ] 00:30:10.330 [2024-11-04 02:40:57.269727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.330 [2024-11-04 02:40:57.353563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.589 [2024-11-04 02:40:57.557949] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:10.589 [2024-11-04 02:40:57.558043] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:10.849 [2024-11-04 02:40:57.711341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.849 [2024-11-04 02:40:57.711380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:10.849 [2024-11-04 02:40:57.711393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:10.849 [2024-11-04 02:40:57.711400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.849 [2024-11-04 02:40:57.711436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.850 [2024-11-04 02:40:57.711444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:10.850 [2024-11-04 02:40:57.711452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:30:10.850 [2024-11-04 02:40:57.711457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.850 [2024-11-04 02:40:57.711470] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:10.850 [2024-11-04 02:40:57.712012] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:10.850 [2024-11-04 02:40:57.712028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.850 [2024-11-04 02:40:57.712035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:10.850 [2024-11-04 02:40:57.712041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.561 ms 00:30:10.850 [2024-11-04 02:40:57.712047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.850 [2024-11-04 02:40:57.712256] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:10.850 [2024-11-04 02:40:57.712274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.850 [2024-11-04 02:40:57.712281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:10.850 [2024-11-04 02:40:57.712290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:30:10.850 [2024-11-04 02:40:57.712296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.850 [2024-11-04 02:40:57.712352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.850 [2024-11-04 
02:40:57.712361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:10.850 [2024-11-04 02:40:57.712367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:30:10.850 [2024-11-04 02:40:57.712372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.850 [2024-11-04 02:40:57.712566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.850 [2024-11-04 02:40:57.712575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:10.850 [2024-11-04 02:40:57.712583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:30:10.850 [2024-11-04 02:40:57.712588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.850 [2024-11-04 02:40:57.712636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.850 [2024-11-04 02:40:57.712644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:10.850 [2024-11-04 02:40:57.712650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:30:10.850 [2024-11-04 02:40:57.712656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.850 [2024-11-04 02:40:57.712673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.850 [2024-11-04 02:40:57.712681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:10.850 [2024-11-04 02:40:57.712688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:10.850 [2024-11-04 02:40:57.712695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.850 [2024-11-04 02:40:57.712707] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:10.850 [2024-11-04 02:40:57.715534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.850 [2024-11-04 02:40:57.715559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:10.850 [2024-11-04 02:40:57.715567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.829 ms 00:30:10.850 [2024-11-04 02:40:57.715573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.850 [2024-11-04 02:40:57.715597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.850 [2024-11-04 02:40:57.715613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:10.850 [2024-11-04 02:40:57.715620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:10.850 [2024-11-04 02:40:57.715625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.850 [2024-11-04 02:40:57.715655] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:10.850 [2024-11-04 02:40:57.715671] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:10.850 [2024-11-04 02:40:57.715699] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:10.850 [2024-11-04 02:40:57.715710] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:10.850 [2024-11-04 02:40:57.715790] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:10.850 [2024-11-04 02:40:57.715799] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:10.850 [2024-11-04 02:40:57.715807] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:10.850 [2024-11-04 02:40:57.715814] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:10.850 [2024-11-04 02:40:57.715821] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:10.850 [2024-11-04 02:40:57.715827] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:10.850 [2024-11-04 02:40:57.715833] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:10.850 [2024-11-04 02:40:57.715841] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:10.850 [2024-11-04 02:40:57.715846] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:10.850 [2024-11-04 02:40:57.715852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.850 [2024-11-04 02:40:57.715857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:10.850 [2024-11-04 02:40:57.715863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:30:10.850 [2024-11-04 02:40:57.715885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.850 [2024-11-04 02:40:57.715948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.850 [2024-11-04 02:40:57.715956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:10.850 [2024-11-04 02:40:57.715962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:30:10.850 [2024-11-04 02:40:57.715968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.850 [2024-11-04 02:40:57.716045] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:10.850 [2024-11-04 02:40:57.716053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:10.850 [2024-11-04 02:40:57.716059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:10.850 [2024-11-04 02:40:57.716065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:10.850 [2024-11-04 02:40:57.716072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:10.850 [2024-11-04 02:40:57.716078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:10.850 [2024-11-04 02:40:57.716083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:10.850 [2024-11-04 02:40:57.716090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:10.850 [2024-11-04 02:40:57.716095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:10.850 [2024-11-04 02:40:57.716102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:10.850 [2024-11-04 02:40:57.716107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:10.850 [2024-11-04 02:40:57.716113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:10.850 [2024-11-04 02:40:57.716118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:10.850 [2024-11-04 02:40:57.716122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:10.850 [2024-11-04 02:40:57.716127] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:30:10.850 [2024-11-04 02:40:57.716132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:10.850 [2024-11-04 02:40:57.716137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:10.850 [2024-11-04 02:40:57.716146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:10.850 [2024-11-04 02:40:57.716151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:10.850 [2024-11-04 02:40:57.716156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:10.850 [2024-11-04 02:40:57.716161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:10.850 [2024-11-04 02:40:57.716166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:10.850 [2024-11-04 02:40:57.716170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:10.850 [2024-11-04 02:40:57.716177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:10.850 [2024-11-04 02:40:57.716182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:10.850 [2024-11-04 02:40:57.716187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:10.850 [2024-11-04 02:40:57.716191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:10.850 [2024-11-04 02:40:57.716196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:10.850 [2024-11-04 02:40:57.716201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:10.850 [2024-11-04 02:40:57.716206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:10.850 [2024-11-04 02:40:57.716210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:10.850 [2024-11-04 02:40:57.716215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:10.850 [2024-11-04 02:40:57.716220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:10.850 [2024-11-04 02:40:57.716224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:10.850 [2024-11-04 02:40:57.716229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:10.850 [2024-11-04 02:40:57.716234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:10.850 [2024-11-04 02:40:57.716239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:10.850 [2024-11-04 02:40:57.716246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:10.850 [2024-11-04 02:40:57.716251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:10.850 [2024-11-04 02:40:57.716257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:10.850 [2024-11-04 02:40:57.716262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:10.850 [2024-11-04 02:40:57.716267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:10.850 [2024-11-04 02:40:57.716272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:10.850 [2024-11-04 02:40:57.716277] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:10.851 [2024-11-04 02:40:57.716283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:10.851 [2024-11-04 02:40:57.716288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:10.851 [2024-11-04 
02:40:57.716293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:10.851 [2024-11-04 02:40:57.716299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:10.851 [2024-11-04 02:40:57.716303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:10.851 [2024-11-04 02:40:57.716308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:10.851 [2024-11-04 02:40:57.716314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:10.851 [2024-11-04 02:40:57.716319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:10.851 [2024-11-04 02:40:57.716324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:10.851 [2024-11-04 02:40:57.716330] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:10.851 [2024-11-04 02:40:57.716337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:10.851 [2024-11-04 02:40:57.716345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:10.851 [2024-11-04 02:40:57.716350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:10.851 [2024-11-04 02:40:57.716355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:10.851 [2024-11-04 02:40:57.716361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:10.851 [2024-11-04 02:40:57.716366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:10.851 [2024-11-04 02:40:57.716371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:10.851 [2024-11-04 02:40:57.716376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:10.851 [2024-11-04 02:40:57.716382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:10.851 [2024-11-04 02:40:57.716388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:10.851 [2024-11-04 02:40:57.716393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:10.851 [2024-11-04 02:40:57.716399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:10.851 [2024-11-04 02:40:57.716404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:10.851 [2024-11-04 02:40:57.716409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:10.851 [2024-11-04 02:40:57.716415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:10.851 [2024-11-04 
02:40:57.716420] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:10.851 [2024-11-04 02:40:57.716426] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:10.851 [2024-11-04 02:40:57.716433] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:10.851 [2024-11-04 02:40:57.716438] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:10.851 [2024-11-04 02:40:57.716444] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:10.851 [2024-11-04 02:40:57.716450] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:10.851 [2024-11-04 02:40:57.716456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.851 [2024-11-04 02:40:57.716461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:10.851 [2024-11-04 02:40:57.716466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.463 ms 00:30:10.851 [2024-11-04 02:40:57.716472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.851 [2024-11-04 02:40:57.734858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.851 [2024-11-04 02:40:57.734889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:10.851 [2024-11-04 02:40:57.734898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.358 ms 00:30:10.851 [2024-11-04 02:40:57.734903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.851 [2024-11-04 02:40:57.734963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.851 [2024-11-04 02:40:57.734970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:10.851 [2024-11-04 02:40:57.734976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:30:10.851 [2024-11-04 02:40:57.734984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.851 [2024-11-04 02:40:57.781219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.851 [2024-11-04 02:40:57.781251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:10.851 [2024-11-04 02:40:57.781261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.199 ms 00:30:10.851 [2024-11-04 02:40:57.781268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.851 [2024-11-04 02:40:57.781299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.851 [2024-11-04 02:40:57.781309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:10.851 [2024-11-04 02:40:57.781315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:10.851 [2024-11-04 02:40:57.781321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.851 [2024-11-04 02:40:57.781397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.851 [2024-11-04 02:40:57.781406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:10.851 [2024-11-04 02:40:57.781413] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:30:10.851 [2024-11-04 02:40:57.781418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.851 [2024-11-04 02:40:57.781508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.851 [2024-11-04 02:40:57.781515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:10.851 [2024-11-04 02:40:57.781523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:30:10.851 [2024-11-04 02:40:57.781529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.851 [2024-11-04 02:40:57.792132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.851 [2024-11-04 02:40:57.792264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:10.851 [2024-11-04 02:40:57.792277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.587 ms 00:30:10.851 [2024-11-04 02:40:57.792283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.851 [2024-11-04 02:40:57.792370] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:10.851 [2024-11-04 02:40:57.792380] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:10.851 [2024-11-04 02:40:57.792390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.851 [2024-11-04 02:40:57.792396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:10.851 [2024-11-04 02:40:57.792403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:30:10.851 [2024-11-04 02:40:57.792410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.851 [2024-11-04 02:40:57.801654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.851 [2024-11-04 02:40:57.801676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:10.851 [2024-11-04 02:40:57.801684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.232 ms 00:30:10.851 [2024-11-04 02:40:57.801691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.851 [2024-11-04 02:40:57.801775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.851 [2024-11-04 02:40:57.801782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:10.851 [2024-11-04 02:40:57.801788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:30:10.851 [2024-11-04 02:40:57.801794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.851 [2024-11-04 02:40:57.801816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.851 [2024-11-04 02:40:57.801825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:10.851 [2024-11-04 02:40:57.801832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:30:10.851 [2024-11-04 02:40:57.801838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.851 [2024-11-04 02:40:57.802324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.851 [2024-11-04 02:40:57.802343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:10.851 [2024-11-04 02:40:57.802349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.433 ms 00:30:10.851 [2024-11-04 02:40:57.802355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.851 [2024-11-04 02:40:57.802366] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:10.851 [2024-11-04 02:40:57.802376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.851 [2024-11-04 02:40:57.802387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:10.851 [2024-11-04 02:40:57.802395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:10.851 [2024-11-04 02:40:57.802401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.851 [2024-11-04 02:40:57.811586] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:10.851 [2024-11-04 02:40:57.811690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.851 [2024-11-04 02:40:57.811702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:10.851 [2024-11-04 02:40:57.811709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.263 ms 00:30:10.851 [2024-11-04 02:40:57.811715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.851 [2024-11-04 02:40:57.813332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.851 [2024-11-04 02:40:57.813350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:10.851 [2024-11-04 02:40:57.813359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.601 ms 00:30:10.851 [2024-11-04 02:40:57.813365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.851 [2024-11-04 02:40:57.813425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.851 [2024-11-04 02:40:57.813433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:10.851 [2024-11-04 02:40:57.813439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:30:10.852 [2024-11-04 02:40:57.813445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.852 [2024-11-04 02:40:57.813470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.852 [2024-11-04 02:40:57.813482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:10.852 [2024-11-04 02:40:57.813492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:10.852 [2024-11-04 02:40:57.813498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.852 [2024-11-04 02:40:57.813517] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:10.852 [2024-11-04 02:40:57.813525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.852 [2024-11-04 02:40:57.813531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:10.852 [2024-11-04 02:40:57.813536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:10.852 [2024-11-04 02:40:57.813542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.852 [2024-11-04 02:40:57.831645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.852 [2024-11-04 02:40:57.831675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:10.852 [2024-11-04 02:40:57.831683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 18.088 ms 00:30:10.852 [2024-11-04 02:40:57.831690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.852 [2024-11-04 02:40:57.831748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.852 [2024-11-04 02:40:57.831755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:10.852 [2024-11-04 02:40:57.831761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:30:10.852 [2024-11-04 02:40:57.831767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.852 [2024-11-04 02:40:57.832443] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 120.794 ms, result 0 00:30:12.238  [2024-11-04T02:41:00.353Z] Copying: 25/1024 [MB] (25 MBps) [2024-11-04T02:41:01.293Z] Copying: 37/1024 [MB] (11 MBps) [2024-11-04T02:41:02.233Z] Copying: 52/1024 [MB] (14 MBps) [2024-11-04T02:41:03.184Z] Copying: 65/1024 [MB] (13 MBps) [2024-11-04T02:41:04.121Z] Copying: 83/1024 [MB] (17 MBps) [2024-11-04T02:41:05.063Z] Copying: 103/1024 [MB] (20 MBps) [2024-11-04T02:41:06.006Z] Copying: 125/1024 [MB] (22 MBps) [2024-11-04T02:41:07.392Z] Copying: 136/1024 [MB] (10 MBps) [2024-11-04T02:41:08.334Z] Copying: 150/1024 [MB] (14 MBps) [2024-11-04T02:41:09.277Z] Copying: 167/1024 [MB] (16 MBps) [2024-11-04T02:41:10.221Z] Copying: 184/1024 [MB] (16 MBps) [2024-11-04T02:41:11.161Z] Copying: 194/1024 [MB] (10 MBps) [2024-11-04T02:41:12.107Z] Copying: 214/1024 [MB] (20 MBps) [2024-11-04T02:41:13.048Z] Copying: 232/1024 [MB] (18 MBps) [2024-11-04T02:41:13.989Z] Copying: 244/1024 [MB] (11 MBps) [2024-11-04T02:41:15.371Z] Copying: 255/1024 [MB] (10 MBps) [2024-11-04T02:41:16.315Z] Copying: 266/1024 [MB] (11 MBps) [2024-11-04T02:41:17.326Z] Copying: 277/1024 [MB] (10 MBps) [2024-11-04T02:41:18.273Z] Copying: 288/1024 [MB] (11 MBps) [2024-11-04T02:41:19.214Z] Copying: 299/1024 [MB] (10 MBps) [2024-11-04T02:41:20.156Z] Copying: 311/1024 [MB] (12 MBps) [2024-11-04T02:41:21.099Z] Copying: 322/1024 [MB] (10 MBps) [2024-11-04T02:41:22.042Z] Copying: 332/1024 [MB] (10 MBps) [2024-11-04T02:41:22.986Z] Copying: 343/1024 [MB] (10 MBps) [2024-11-04T02:41:24.372Z] Copying: 359/1024 [MB] (16 MBps) [2024-11-04T02:41:25.312Z] Copying: 378/1024 [MB] (19 MBps) [2024-11-04T02:41:26.254Z] Copying: 397/1024 [MB] (18 MBps) [2024-11-04T02:41:27.198Z] Copying: 408/1024 [MB] (11 MBps) [2024-11-04T02:41:28.141Z] Copying: 422/1024 [MB] (14 MBps) [2024-11-04T02:41:29.080Z] Copying: 436/1024 [MB] (13 MBps) [2024-11-04T02:41:30.019Z] Copying: 454/1024 [MB] (17 MBps) [2024-11-04T02:41:31.402Z] Copying: 471/1024 [MB] (17 MBps) [2024-11-04T02:41:31.973Z] Copying: 483/1024 [MB] (12 MBps) [2024-11-04T02:41:33.357Z] Copying: 497/1024 [MB] (14 MBps) [2024-11-04T02:41:34.303Z] Copying: 508/1024 [MB] (10 MBps) [2024-11-04T02:41:35.257Z] Copying: 519/1024 [MB] (10 MBps) [2024-11-04T02:41:36.194Z] Copying: 534/1024 [MB] (15 MBps) [2024-11-04T02:41:37.136Z] Copying: 554/1024 [MB] (20 MBps) [2024-11-04T02:41:38.080Z] Copying: 565/1024 [MB] (10 MBps) [2024-11-04T02:41:39.023Z] Copying: 582/1024 [MB] (16 MBps) [2024-11-04T02:41:40.410Z] Copying: 598/1024 [MB] (16 MBps) [2024-11-04T02:41:40.980Z] Copying: 609/1024 [MB] (10 MBps) [2024-11-04T02:41:42.361Z] Copying: 620/1024 [MB] (10 MBps) [2024-11-04T02:41:43.306Z] Copying: 644/1024 [MB] (23 MBps) [2024-11-04T02:41:44.250Z] Copying: 654/1024 [MB] (10 MBps) [2024-11-04T02:41:45.194Z] Copying: 665/1024 [MB] (10 MBps) 
[2024-11-04T02:41:46.136Z] Copying: 678/1024 [MB] (12 MBps) [2024-11-04T02:41:47.077Z] Copying: 690/1024 [MB] (12 MBps) [2024-11-04T02:41:48.018Z] Copying: 704/1024 [MB] (13 MBps) [2024-11-04T02:41:49.405Z] Copying: 722/1024 [MB] (18 MBps) [2024-11-04T02:41:49.976Z] Copying: 733/1024 [MB] (11 MBps) [2024-11-04T02:41:51.365Z] Copying: 752/1024 [MB] (18 MBps) [2024-11-04T02:41:51.985Z] Copying: 764/1024 [MB] (12 MBps) [2024-11-04T02:41:53.370Z] Copying: 781/1024 [MB] (16 MBps) [2024-11-04T02:41:54.310Z] Copying: 792/1024 [MB] (10 MBps) [2024-11-04T02:41:55.252Z] Copying: 803/1024 [MB] (11 MBps) [2024-11-04T02:41:56.193Z] Copying: 814/1024 [MB] (10 MBps) [2024-11-04T02:41:57.137Z] Copying: 828/1024 [MB] (14 MBps) [2024-11-04T02:41:58.078Z] Copying: 838/1024 [MB] (10 MBps) [2024-11-04T02:41:59.014Z] Copying: 854/1024 [MB] (15 MBps) [2024-11-04T02:42:00.402Z] Copying: 874/1024 [MB] (19 MBps) [2024-11-04T02:42:00.974Z] Copying: 895/1024 [MB] (21 MBps) [2024-11-04T02:42:02.362Z] Copying: 906/1024 [MB] (10 MBps) [2024-11-04T02:42:03.307Z] Copying: 917/1024 [MB] (11 MBps) [2024-11-04T02:42:04.250Z] Copying: 928/1024 [MB] (11 MBps) [2024-11-04T02:42:05.194Z] Copying: 943/1024 [MB] (15 MBps) [2024-11-04T02:42:06.136Z] Copying: 964/1024 [MB] (20 MBps) [2024-11-04T02:42:07.077Z] Copying: 977/1024 [MB] (12 MBps) [2024-11-04T02:42:08.021Z] Copying: 989/1024 [MB] (12 MBps) [2024-11-04T02:42:08.961Z] Copying: 1000/1024 [MB] (11 MBps) [2024-11-04T02:42:08.961Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-04 02:42:08.862643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.850 [2024-11-04 02:42:08.862743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:21.850 [2024-11-04 02:42:08.862770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:21.850 [2024-11-04 02:42:08.862786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.850 [2024-11-04 02:42:08.862827] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:21.850 [2024-11-04 02:42:08.867419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.850 [2024-11-04 02:42:08.867465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:21.850 [2024-11-04 02:42:08.867479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.564 ms 00:31:21.850 [2024-11-04 02:42:08.867489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.850 [2024-11-04 02:42:08.867795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.850 [2024-11-04 02:42:08.867815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:21.850 [2024-11-04 02:42:08.867827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:31:21.850 [2024-11-04 02:42:08.867837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.850 [2024-11-04 02:42:08.867883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.850 [2024-11-04 02:42:08.867896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:21.850 [2024-11-04 02:42:08.867911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:21.850 [2024-11-04 02:42:08.867921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.850 [2024-11-04 02:42:08.867973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:31:21.850 [2024-11-04 02:42:08.867982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:21.850 [2024-11-04 02:42:08.867990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:31:21.850 [2024-11-04 02:42:08.867997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.850 [2024-11-04 02:42:08.868011] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:21.850 [2024-11-04 02:42:08.868023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:21.850 [2024-11-04 02:42:08.868032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:21.850 [2024-11-04 02:42:08.868040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:21.850 [2024-11-04 02:42:08.868048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:21.850 [2024-11-04 02:42:08.868055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:21.850 [2024-11-04 02:42:08.868062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:21.850 [2024-11-04 02:42:08.868072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:21.850 [2024-11-04 02:42:08.868079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:21.850 [2024-11-04 02:42:08.868087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:21.850 [2024-11-04 02:42:08.868095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:21.850 [2024-11-04 02:42:08.868104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:21.850 [2024-11-04 02:42:08.868112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:21.850 [2024-11-04 02:42:08.868119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:21.850 [2024-11-04 02:42:08.868127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:21.850 [2024-11-04 02:42:08.868135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:21.850 [2024-11-04 02:42:08.868143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:21.850 [2024-11-04 02:42:08.868151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:21.850 [2024-11-04 02:42:08.868158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:21.850 [2024-11-04 02:42:08.868167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:21.850 [2024-11-04 02:42:08.868174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:21.850 [2024-11-04 02:42:08.868182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:21.850 [2024-11-04 02:42:08.868190] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:21.850 [2024-11-04 02:42:08.868198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:21.850 [2024-11-04 02:42:08.868206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:21.850 [2024-11-04 02:42:08.868213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868378] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 
02:42:08.868583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:21.851 [2024-11-04 02:42:08.868728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:21.852 [2024-11-04 02:42:08.868737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:21.852 [2024-11-04 02:42:08.868745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:21.852 [2024-11-04 02:42:08.868753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:21.852 [2024-11-04 02:42:08.868761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:21.852 [2024-11-04 02:42:08.868768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 
00:31:21.852 [2024-11-04 02:42:08.868775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:21.852 [2024-11-04 02:42:08.868783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:21.852 [2024-11-04 02:42:08.868790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:21.852 [2024-11-04 02:42:08.868798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:21.852 [2024-11-04 02:42:08.868813] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:21.852 [2024-11-04 02:42:08.868821] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f95c5d21-0b04-45bd-8eaf-594cb18bd776 00:31:21.852 [2024-11-04 02:42:08.868831] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:21.852 [2024-11-04 02:42:08.868838] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:31:21.852 [2024-11-04 02:42:08.868845] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:21.852 [2024-11-04 02:42:08.868852] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:21.852 [2024-11-04 02:42:08.868859] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:21.852 [2024-11-04 02:42:08.868879] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:21.852 [2024-11-04 02:42:08.868887] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:21.852 [2024-11-04 02:42:08.868894] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:21.852 [2024-11-04 02:42:08.868902] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:21.852 [2024-11-04 02:42:08.868910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.852 [2024-11-04 02:42:08.868918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:21.852 [2024-11-04 02:42:08.868926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.900 ms 00:31:21.852 [2024-11-04 02:42:08.868934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.852 [2024-11-04 02:42:08.881671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.852 [2024-11-04 02:42:08.881708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:21.852 [2024-11-04 02:42:08.881719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.723 ms 00:31:21.852 [2024-11-04 02:42:08.881727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.852 [2024-11-04 02:42:08.882103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.852 [2024-11-04 02:42:08.882157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:21.852 [2024-11-04 02:42:08.882166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.355 ms 00:31:21.852 [2024-11-04 02:42:08.882173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.852 [2024-11-04 02:42:08.916643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:21.852 [2024-11-04 02:42:08.916691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:21.852 [2024-11-04 02:42:08.916703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:21.852 
[2024-11-04 02:42:08.916712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.852 [2024-11-04 02:42:08.916783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:21.852 [2024-11-04 02:42:08.916793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:21.852 [2024-11-04 02:42:08.916802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:21.852 [2024-11-04 02:42:08.916811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.852 [2024-11-04 02:42:08.916893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:21.852 [2024-11-04 02:42:08.916907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:21.852 [2024-11-04 02:42:08.916916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:21.852 [2024-11-04 02:42:08.916925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.852 [2024-11-04 02:42:08.916942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:21.852 [2024-11-04 02:42:08.916951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:21.852 [2024-11-04 02:42:08.916962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:21.852 [2024-11-04 02:42:08.916971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.160 [2024-11-04 02:42:09.000234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.160 [2024-11-04 02:42:09.000294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:22.160 [2024-11-04 02:42:09.000308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.160 [2024-11-04 02:42:09.000317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.160 [2024-11-04 02:42:09.068848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.160 [2024-11-04 02:42:09.068924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:22.161 [2024-11-04 02:42:09.068936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.161 [2024-11-04 02:42:09.068945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.161 [2024-11-04 02:42:09.069032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.161 [2024-11-04 02:42:09.069043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:22.161 [2024-11-04 02:42:09.069052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.161 [2024-11-04 02:42:09.069061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.161 [2024-11-04 02:42:09.069101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.161 [2024-11-04 02:42:09.069116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:22.161 [2024-11-04 02:42:09.069125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.161 [2024-11-04 02:42:09.069133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.161 [2024-11-04 02:42:09.069217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.161 [2024-11-04 02:42:09.069229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:22.161 [2024-11-04 02:42:09.069238] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.161 [2024-11-04 02:42:09.069246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.161 [2024-11-04 02:42:09.069272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.161 [2024-11-04 02:42:09.069281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:22.161 [2024-11-04 02:42:09.069291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.161 [2024-11-04 02:42:09.069299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.161 [2024-11-04 02:42:09.069338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.161 [2024-11-04 02:42:09.069349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:22.161 [2024-11-04 02:42:09.069358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.161 [2024-11-04 02:42:09.069366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.161 [2024-11-04 02:42:09.069408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:22.161 [2024-11-04 02:42:09.069419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:22.161 [2024-11-04 02:42:09.069427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:22.161 [2024-11-04 02:42:09.069435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.161 [2024-11-04 02:42:09.069566] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 206.902 ms, result 0 00:31:22.746 00:31:22.746 00:31:22.746 02:42:09 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:25.291 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:25.291 02:42:12 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:31:25.291 [2024-11-04 02:42:12.152251] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
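The ftl_restore_fast steps traced above reduce to a short command sequence: dump the FTL bdev to a file with spdk_dd, check the dump against the checksum recorded before shutdown, then write the file back into the bdev at an offset. A minimal sketch of that cycle follows, using the flags exactly as they appear in the log; the 4 KiB block size is inferred from the transfer arithmetic (262144 blocks reported as a 1024 MB copy), not stated explicitly in the log, and SPDK_DIR stands in for the repo checkout.

#!/usr/bin/env bash
# Sketch of the restore_fast read/verify/write-back cycle from the log above.
# Assumes an FTL bdev "ftl0" is described by ftl.json, as in the test setup.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
DD=$SPDK_DIR/build/bin/spdk_dd
CFG=$SPDK_DIR/test/ftl/config/ftl.json
FILE=$SPDK_DIR/test/ftl/testfile

# 262144 blocks x 4096 B = 1024 MB, matching the "Copying: 1024/1024 [MB]" total.
echo $(( 262144 * 4096 / 1024 / 1024 ))

# restore.sh@74: read 262144 blocks out of ftl0 into a regular file.
"$DD" --ib=ftl0 --of="$FILE" --json="$CFG" --count=262144

# restore.sh@76: verify against the checksum taken before the fast shutdown.
md5sum -c "$FILE.md5"

# restore.sh@79: write the file back into ftl0, offset by 131072 blocks
# (512 MiB at the inferred 4 KiB block size).
"$DD" --if="$FILE" --ob=ftl0 --json="$CFG" --seek=131072

Each spdk_dd invocation stands up a fresh SPDK application, which is why every step in the log is bracketed by a full 'FTL startup' / 'FTL fast shutdown' management sequence; the fast path appears to skip a full metadata persist because the shared-memory state is left clean, which the next load confirms with 'SHM: clean 1, shm_clean 1'. The layout numbers are self-consistent as well: 20971520 L2P entries at an address size of 4 bytes is exactly the 80.00 MiB l2p region reported in the layout dump.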
00:31:25.291 [2024-11-04 02:42:12.152359] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82839 ] 00:31:25.291 [2024-11-04 02:42:12.302414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.291 [2024-11-04 02:42:12.376778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.549 [2024-11-04 02:42:12.580817] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:25.549 [2024-11-04 02:42:12.580860] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:25.809 [2024-11-04 02:42:12.727803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.809 [2024-11-04 02:42:12.727839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:25.809 [2024-11-04 02:42:12.727853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:25.809 [2024-11-04 02:42:12.727861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.809 [2024-11-04 02:42:12.727914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.809 [2024-11-04 02:42:12.727922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:25.809 [2024-11-04 02:42:12.727931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:31:25.809 [2024-11-04 02:42:12.727937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.809 [2024-11-04 02:42:12.727950] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:25.809 [2024-11-04 02:42:12.728484] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:25.809 [2024-11-04 02:42:12.728499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.809 [2024-11-04 02:42:12.728506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:25.809 [2024-11-04 02:42:12.728512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:31:25.809 [2024-11-04 02:42:12.728518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.809 [2024-11-04 02:42:12.728739] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:25.809 [2024-11-04 02:42:12.728762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.809 [2024-11-04 02:42:12.728769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:25.809 [2024-11-04 02:42:12.728778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:31:25.809 [2024-11-04 02:42:12.728783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.809 [2024-11-04 02:42:12.728815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.809 [2024-11-04 02:42:12.728821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:25.809 [2024-11-04 02:42:12.728827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:31:25.809 [2024-11-04 02:42:12.728833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.809 [2024-11-04 02:42:12.729037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:25.809 [2024-11-04 02:42:12.729046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:25.809 [2024-11-04 02:42:12.729054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:31:25.809 [2024-11-04 02:42:12.729060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.809 [2024-11-04 02:42:12.729108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.809 [2024-11-04 02:42:12.729114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:25.809 [2024-11-04 02:42:12.729120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:31:25.809 [2024-11-04 02:42:12.729125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.809 [2024-11-04 02:42:12.729140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.809 [2024-11-04 02:42:12.729148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:25.809 [2024-11-04 02:42:12.729154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:25.809 [2024-11-04 02:42:12.729161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.809 [2024-11-04 02:42:12.729174] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:25.809 [2024-11-04 02:42:12.732019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.809 [2024-11-04 02:42:12.732044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:25.809 [2024-11-04 02:42:12.732051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.848 ms 00:31:25.809 [2024-11-04 02:42:12.732057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.809 [2024-11-04 02:42:12.732081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.809 [2024-11-04 02:42:12.732087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:25.809 [2024-11-04 02:42:12.732093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:25.809 [2024-11-04 02:42:12.732099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.809 [2024-11-04 02:42:12.732130] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:25.809 [2024-11-04 02:42:12.732147] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:25.809 [2024-11-04 02:42:12.732175] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:25.809 [2024-11-04 02:42:12.732186] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:25.809 [2024-11-04 02:42:12.732264] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:25.810 [2024-11-04 02:42:12.732273] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:25.810 [2024-11-04 02:42:12.732281] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:25.810 [2024-11-04 02:42:12.732289] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:25.810 [2024-11-04 02:42:12.732296] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:25.810 [2024-11-04 02:42:12.732302] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:25.810 [2024-11-04 02:42:12.732308] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:25.810 [2024-11-04 02:42:12.732315] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:25.810 [2024-11-04 02:42:12.732321] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:25.810 [2024-11-04 02:42:12.732327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.810 [2024-11-04 02:42:12.732332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:25.810 [2024-11-04 02:42:12.732338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:31:25.810 [2024-11-04 02:42:12.732343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.810 [2024-11-04 02:42:12.732409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.810 [2024-11-04 02:42:12.732416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:25.810 [2024-11-04 02:42:12.732421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:31:25.810 [2024-11-04 02:42:12.732427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.810 [2024-11-04 02:42:12.732501] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:25.810 [2024-11-04 02:42:12.732510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:25.810 [2024-11-04 02:42:12.732516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:25.810 [2024-11-04 02:42:12.732522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:25.810 [2024-11-04 02:42:12.732529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:25.810 [2024-11-04 02:42:12.732535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:25.810 [2024-11-04 02:42:12.732541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:25.810 [2024-11-04 02:42:12.732548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:25.810 [2024-11-04 02:42:12.732554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:25.810 [2024-11-04 02:42:12.732559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:25.810 [2024-11-04 02:42:12.732565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:25.810 [2024-11-04 02:42:12.732570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:25.810 [2024-11-04 02:42:12.732575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:25.810 [2024-11-04 02:42:12.732580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:25.810 [2024-11-04 02:42:12.732585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:25.810 [2024-11-04 02:42:12.732590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:25.810 [2024-11-04 02:42:12.732595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:25.810 [2024-11-04 02:42:12.732606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:25.810 [2024-11-04 02:42:12.732611] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:25.810 [2024-11-04 02:42:12.732616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:25.810 [2024-11-04 02:42:12.732621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:25.810 [2024-11-04 02:42:12.732625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:25.810 [2024-11-04 02:42:12.732630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:25.810 [2024-11-04 02:42:12.732635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:25.810 [2024-11-04 02:42:12.732640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:25.810 [2024-11-04 02:42:12.732645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:25.810 [2024-11-04 02:42:12.732650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:25.810 [2024-11-04 02:42:12.732655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:25.810 [2024-11-04 02:42:12.732660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:25.810 [2024-11-04 02:42:12.732665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:25.810 [2024-11-04 02:42:12.732670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:25.810 [2024-11-04 02:42:12.732675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:25.810 [2024-11-04 02:42:12.732679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:25.810 [2024-11-04 02:42:12.732684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:25.810 [2024-11-04 02:42:12.732689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:25.810 [2024-11-04 02:42:12.732694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:25.810 [2024-11-04 02:42:12.732699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:25.810 [2024-11-04 02:42:12.732703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:25.810 [2024-11-04 02:42:12.732708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:25.810 [2024-11-04 02:42:12.732713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:25.810 [2024-11-04 02:42:12.732719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:25.810 [2024-11-04 02:42:12.732724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:25.810 [2024-11-04 02:42:12.732730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:25.810 [2024-11-04 02:42:12.732735] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:25.810 [2024-11-04 02:42:12.732742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:25.810 [2024-11-04 02:42:12.732748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:25.810 [2024-11-04 02:42:12.732753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:25.810 [2024-11-04 02:42:12.732759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:25.810 [2024-11-04 02:42:12.732764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:25.810 [2024-11-04 02:42:12.732769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:25.810 
[2024-11-04 02:42:12.732774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:25.810 [2024-11-04 02:42:12.732779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:25.810 [2024-11-04 02:42:12.732784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:25.810 [2024-11-04 02:42:12.732790] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:25.810 [2024-11-04 02:42:12.732797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:25.810 [2024-11-04 02:42:12.732804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:25.810 [2024-11-04 02:42:12.732810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:25.810 [2024-11-04 02:42:12.732816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:25.810 [2024-11-04 02:42:12.732822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:25.810 [2024-11-04 02:42:12.732827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:25.810 [2024-11-04 02:42:12.732832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:25.810 [2024-11-04 02:42:12.732838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:25.810 [2024-11-04 02:42:12.732843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:25.810 [2024-11-04 02:42:12.732848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:25.810 [2024-11-04 02:42:12.732854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:25.810 [2024-11-04 02:42:12.732859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:25.810 [2024-11-04 02:42:12.732874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:25.810 [2024-11-04 02:42:12.732880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:25.810 [2024-11-04 02:42:12.732886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:25.810 [2024-11-04 02:42:12.732891] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:25.810 [2024-11-04 02:42:12.732897] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:25.810 [2024-11-04 02:42:12.732904] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:25.810 [2024-11-04 02:42:12.732911] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:25.810 [2024-11-04 02:42:12.732917] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:25.810 [2024-11-04 02:42:12.732923] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:25.810 [2024-11-04 02:42:12.732929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.810 [2024-11-04 02:42:12.732935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:25.810 [2024-11-04 02:42:12.732941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.480 ms 00:31:25.810 [2024-11-04 02:42:12.732946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.810 [2024-11-04 02:42:12.751298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.810 [2024-11-04 02:42:12.751391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:25.810 [2024-11-04 02:42:12.751431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.321 ms 00:31:25.811 [2024-11-04 02:42:12.751448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.811 [2024-11-04 02:42:12.751519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.811 [2024-11-04 02:42:12.751535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:25.811 [2024-11-04 02:42:12.751550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:31:25.811 [2024-11-04 02:42:12.751567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.811 [2024-11-04 02:42:12.790563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.811 [2024-11-04 02:42:12.790668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:25.811 [2024-11-04 02:42:12.790712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.940 ms 00:31:25.811 [2024-11-04 02:42:12.790730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.811 [2024-11-04 02:42:12.790770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.811 [2024-11-04 02:42:12.791050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:25.811 [2024-11-04 02:42:12.791088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:25.811 [2024-11-04 02:42:12.791107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.811 [2024-11-04 02:42:12.791235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.811 [2024-11-04 02:42:12.791297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:25.811 [2024-11-04 02:42:12.791332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:31:25.811 [2024-11-04 02:42:12.791350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.811 [2024-11-04 02:42:12.791483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.811 [2024-11-04 02:42:12.791503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:25.811 [2024-11-04 02:42:12.791540] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:31:25.811 [2024-11-04 02:42:12.791579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.811 [2024-11-04 02:42:12.801928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.811 [2024-11-04 02:42:12.802013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:25.811 [2024-11-04 02:42:12.802051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.314 ms 00:31:25.811 [2024-11-04 02:42:12.802067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.811 [2024-11-04 02:42:12.802160] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:25.811 [2024-11-04 02:42:12.802358] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:25.811 [2024-11-04 02:42:12.802400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.811 [2024-11-04 02:42:12.802416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:25.811 [2024-11-04 02:42:12.802431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:31:25.811 [2024-11-04 02:42:12.802470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.811 [2024-11-04 02:42:12.811752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.811 [2024-11-04 02:42:12.811827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:25.811 [2024-11-04 02:42:12.811874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.257 ms 00:31:25.811 [2024-11-04 02:42:12.811891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.811 [2024-11-04 02:42:12.811986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.811 [2024-11-04 02:42:12.812003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:25.811 [2024-11-04 02:42:12.812040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:31:25.811 [2024-11-04 02:42:12.812057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.811 [2024-11-04 02:42:12.812090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.811 [2024-11-04 02:42:12.812112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:25.811 [2024-11-04 02:42:12.812230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:31:25.811 [2024-11-04 02:42:12.812253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.811 [2024-11-04 02:42:12.812704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.811 [2024-11-04 02:42:12.812771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:25.811 [2024-11-04 02:42:12.812810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.392 ms 00:31:25.811 [2024-11-04 02:42:12.812826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.811 [2024-11-04 02:42:12.812848] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:25.811 [2024-11-04 02:42:12.812945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.811 [2024-11-04 02:42:12.812967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:31:25.811 [2024-11-04 02:42:12.812982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:31:25.811 [2024-11-04 02:42:12.812996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.811 [2024-11-04 02:42:12.821493] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:25.811 [2024-11-04 02:42:12.821651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.811 [2024-11-04 02:42:12.821699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:25.811 [2024-11-04 02:42:12.821778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.632 ms 00:31:25.811 [2024-11-04 02:42:12.821801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.811 [2024-11-04 02:42:12.823355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.811 [2024-11-04 02:42:12.823373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:25.811 [2024-11-04 02:42:12.823382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.527 ms 00:31:25.811 [2024-11-04 02:42:12.823389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.811 [2024-11-04 02:42:12.823446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.811 [2024-11-04 02:42:12.823453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:25.811 [2024-11-04 02:42:12.823460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:31:25.811 [2024-11-04 02:42:12.823465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.811 [2024-11-04 02:42:12.823491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.811 [2024-11-04 02:42:12.823497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:25.811 [2024-11-04 02:42:12.823505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:25.811 [2024-11-04 02:42:12.823511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.811 [2024-11-04 02:42:12.823530] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:25.811 [2024-11-04 02:42:12.823537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.811 [2024-11-04 02:42:12.823542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:25.811 [2024-11-04 02:42:12.823548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:25.811 [2024-11-04 02:42:12.823553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.811 [2024-11-04 02:42:12.841458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.811 [2024-11-04 02:42:12.841485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:25.811 [2024-11-04 02:42:12.841493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.890 ms 00:31:25.811 [2024-11-04 02:42:12.841499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.811 [2024-11-04 02:42:12.841550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.811 [2024-11-04 02:42:12.841557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:25.811 [2024-11-04 02:42:12.841564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.027 ms 00:31:25.811 [2024-11-04 02:42:12.841569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.811 [2024-11-04 02:42:12.842243] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 114.144 ms, result 0 00:31:26.745 [2024-11-04T02:42:15.228Z] Copying: 47/1024 [MB] (47 MBps) [2024-11-04T02:42:58.256Z] Copying: 1048540/1048576 [kB] (896 kBps) [2024-11-04T02:42:58.256Z] Copying: 1024/1024 [MB] (average 22 MBps) [2024-11-04 02:42:57.912521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.145 [2024-11-04 02:42:57.912615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:11.145 [2024-11-04 02:42:57.912638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:11.145 [2024-11-04 02:42:57.912649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
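Two quick consistency checks on the startup and copy above (illustrative arithmetic only, not part of the test): the layout dump reports 20971520 L2P entries with a 4-byte address size, and 20971520 * 4 bytes is exactly the 80.00 MiB logged for Region l2p; likewise, 1024 MiB copied in the roughly 45 s between 'FTL startup' finishing (02:42:12.8) and the final progress tick (02:42:58) works out to the logged average.

    # l2p region size: entries * address size, expressed in MiB
    echo $(( 20971520 * 4 / 1024 / 1024 ))            # 80 -> matches "blocks: 80.00 MiB"

    # average copy throughput: 1024 MiB over ~45 s of wall time
    awk 'BEGIN { printf "%.0f MBps\n", 1024 / 45 }'   # ~23 MBps, in line with "average 22 MBps"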
00:32:11.145 [2024-11-04 02:42:57.914835] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:11.145 [2024-11-04 02:42:57.920559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.145 [2024-11-04 02:42:57.920605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:11.145 [2024-11-04 02:42:57.920618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.655 ms 00:32:11.145 [2024-11-04 02:42:57.920627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.145 [2024-11-04 02:42:57.932196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.145 [2024-11-04 02:42:57.932377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:11.145 [2024-11-04 02:42:57.932408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.657 ms 00:32:11.145 [2024-11-04 02:42:57.932418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.145 [2024-11-04 02:42:57.932457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.145 [2024-11-04 02:42:57.932468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:11.145 [2024-11-04 02:42:57.932478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:11.145 [2024-11-04 02:42:57.932487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.145 [2024-11-04 02:42:57.932555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.145 [2024-11-04 02:42:57.932567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:11.145 [2024-11-04 02:42:57.932577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:32:11.145 [2024-11-04 02:42:57.932589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.145 [2024-11-04 02:42:57.932604] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:11.145 [2024-11-04 02:42:57.932618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129024 / 261120 wr_cnt: 1 state: open 00:32:11.145 [2024-11-04 02:42:57.932629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:11.145 [2024-11-04 02:42:57.932638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:11.145 [2024-11-04 02:42:57.932646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:11.145 [2024-11-04 02:42:57.932654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:11.145 [2024-11-04 02:42:57.932662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:11.145 [2024-11-04 02:42:57.932670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:11.145 [2024-11-04 02:42:57.932678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:11.145 [2024-11-04 02:42:57.932687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:11.145 [2024-11-04 02:42:57.932694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:11.145 [2024-11-04 02:42:57.932703] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:11.145 [2024-11-04 02:42:57.932712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:11.145 [2024-11-04 02:42:57.932719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:11.145 [2024-11-04 02:42:57.932727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:11.145 [2024-11-04 02:42:57.932735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:11.145 [2024-11-04 02:42:57.932744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:11.145 [2024-11-04 02:42:57.932752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:11.145 [2024-11-04 02:42:57.932759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:11.145 [2024-11-04 02:42:57.932767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:11.145 [2024-11-04 02:42:57.932775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:11.145 [2024-11-04 02:42:57.932784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:11.145 [2024-11-04 02:42:57.932795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:11.145 [2024-11-04 02:42:57.932806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:11.145 [2024-11-04 02:42:57.932814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:11.145 [2024-11-04 02:42:57.932822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:11.145 [2024-11-04 02:42:57.932831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:11.145 [2024-11-04 02:42:57.932839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.932847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.932855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.932863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.932891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.932899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.932908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.932916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.932924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 
[2024-11-04 02:42:57.932933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.932942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.932950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.932958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.932966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.932975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.932983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.932991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.932999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:32:11.146 [2024-11-04 02:42:57.933156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:11.146 [2024-11-04 02:42:57.933496] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:11.146 [2024-11-04 02:42:57.933504] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f95c5d21-0b04-45bd-8eaf-594cb18bd776 00:32:11.146 [2024-11-04 02:42:57.933514] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129024 00:32:11.146 [2024-11-04 02:42:57.933521] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129056 00:32:11.146 [2024-11-04 02:42:57.933530] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129024 00:32:11.146 [2024-11-04 02:42:57.933539] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:32:11.146 [2024-11-04 02:42:57.933546] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:11.146 [2024-11-04 02:42:57.933556] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:11.146 [2024-11-04 02:42:57.933567] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:11.146 [2024-11-04 02:42:57.933574] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:11.146 [2024-11-04 02:42:57.933581] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:11.146 [2024-11-04 02:42:57.933587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:32:11.146 [2024-11-04 02:42:57.933595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:11.146 [2024-11-04 02:42:57.933603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:32:11.146 [2024-11-04 02:42:57.933611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.146 [2024-11-04 02:42:57.948366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.146 [2024-11-04 02:42:57.948411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:11.146 [2024-11-04 02:42:57.948423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.739 ms 00:32:11.146 [2024-11-04 02:42:57.948431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.146 [2024-11-04 02:42:57.948857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.147 [2024-11-04 02:42:57.948888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:11.147 [2024-11-04 02:42:57.948905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:32:11.147 [2024-11-04 02:42:57.948914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.147 [2024-11-04 02:42:57.988196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.147 [2024-11-04 02:42:57.988245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:11.147 [2024-11-04 02:42:57.988263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.147 [2024-11-04 02:42:57.988272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.147 [2024-11-04 02:42:57.988344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.147 [2024-11-04 02:42:57.988354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:11.147 [2024-11-04 02:42:57.988362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.147 [2024-11-04 02:42:57.988370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.147 [2024-11-04 02:42:57.988426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.147 [2024-11-04 02:42:57.988437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:11.147 [2024-11-04 02:42:57.988446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.147 [2024-11-04 02:42:57.988458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.147 [2024-11-04 02:42:57.988475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.147 [2024-11-04 02:42:57.988485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:11.147 [2024-11-04 02:42:57.988493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.147 [2024-11-04 02:42:57.988501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.147 [2024-11-04 02:42:58.079646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.147 [2024-11-04 02:42:58.079894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:11.147 [2024-11-04 02:42:58.079917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.147 [2024-11-04 02:42:58.079935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.147 [2024-11-04 
02:42:58.157650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.147 [2024-11-04 02:42:58.157724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:11.147 [2024-11-04 02:42:58.157741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.147 [2024-11-04 02:42:58.157759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.147 [2024-11-04 02:42:58.157899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.147 [2024-11-04 02:42:58.157913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:11.147 [2024-11-04 02:42:58.157925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.147 [2024-11-04 02:42:58.157934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.147 [2024-11-04 02:42:58.157981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.147 [2024-11-04 02:42:58.157992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:11.147 [2024-11-04 02:42:58.158002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.147 [2024-11-04 02:42:58.158011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.147 [2024-11-04 02:42:58.158099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.147 [2024-11-04 02:42:58.158110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:11.147 [2024-11-04 02:42:58.158119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.147 [2024-11-04 02:42:58.158128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.147 [2024-11-04 02:42:58.158158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.147 [2024-11-04 02:42:58.158171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:11.147 [2024-11-04 02:42:58.158179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.147 [2024-11-04 02:42:58.158188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.147 [2024-11-04 02:42:58.158237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.147 [2024-11-04 02:42:58.158248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:11.147 [2024-11-04 02:42:58.158257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.147 [2024-11-04 02:42:58.158265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.147 [2024-11-04 02:42:58.158328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.147 [2024-11-04 02:42:58.158340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:11.147 [2024-11-04 02:42:58.158349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.147 [2024-11-04 02:42:58.158358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.147 [2024-11-04 02:42:58.158512] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 247.243 ms, result 0 00:32:12.534 00:32:12.534 00:32:12.534 02:42:59 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:32:12.534 [2024-11-04 02:42:59.497351] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 00:32:12.534 [2024-11-04 02:42:59.497505] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83329 ] 00:32:12.795 [2024-11-04 02:42:59.662649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:12.795 [2024-11-04 02:42:59.787223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:13.056 [2024-11-04 02:43:00.080696] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:13.056 [2024-11-04 02:43:00.081006] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:13.319 [2024-11-04 02:43:00.243270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.319 [2024-11-04 02:43:00.243331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:13.319 [2024-11-04 02:43:00.243350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:13.319 [2024-11-04 02:43:00.243359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.319 [2024-11-04 02:43:00.243416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.319 [2024-11-04 02:43:00.243428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:13.319 [2024-11-04 02:43:00.243439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:32:13.319 [2024-11-04 02:43:00.243447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.319 [2024-11-04 02:43:00.243469] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:13.319 [2024-11-04 02:43:00.244388] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:13.319 [2024-11-04 02:43:00.244438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.319 [2024-11-04 02:43:00.244448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:13.319 [2024-11-04 02:43:00.244458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.975 ms 00:32:13.320 [2024-11-04 02:43:00.244468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.320 [2024-11-04 02:43:00.244803] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:13.320 [2024-11-04 02:43:00.244833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.320 [2024-11-04 02:43:00.244842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:13.320 [2024-11-04 02:43:00.244856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:32:13.320 [2024-11-04 02:43:00.244885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.320 [2024-11-04 02:43:00.244945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.320 [2024-11-04 02:43:00.244957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:13.320 [2024-11-04 02:43:00.244966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 
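restore.sh@80 reverses the direction of restore.sh@79: spdk_dd now reads from ftl0 (--ib) back into the plain test file, skipping the same 131072-block offset that was written earlier and copying 262144 blocks. Assuming the 4 KiB FTL block size, 262144 blocks is the same 1024 MiB that was just written, so a later checksum comparison of the output file, as in the earlier md5sum -c step, verifies that the data survived the fast shutdown/startup cycle. A sketch of the read-back under those assumptions:

    #!/usr/bin/env bash
    SPDK=/home/vagrant/spdk_repo/spdk

    # Read 262144 blocks (1024 MiB at 4 KiB/block) back out of ftl0,
    # starting at block offset 131072, into a regular file.
    "$SPDK/build/bin/spdk_dd" \
        --ib=ftl0 \
        --of="$SPDK/test/ftl/testfile" \
        --json="$SPDK/test/ftl/config/ftl.json" \
        --skip=131072 \
        --count=262144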
00:32:13.320 [2024-11-04 02:43:00.244973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.320 [2024-11-04 02:43:00.245244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.320 [2024-11-04 02:43:00.245258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:13.320 [2024-11-04 02:43:00.245270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:32:13.320 [2024-11-04 02:43:00.245278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.320 [2024-11-04 02:43:00.245349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.320 [2024-11-04 02:43:00.245361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:13.320 [2024-11-04 02:43:00.245369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:32:13.320 [2024-11-04 02:43:00.245377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.320 [2024-11-04 02:43:00.245401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.320 [2024-11-04 02:43:00.245411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:13.320 [2024-11-04 02:43:00.245420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:13.320 [2024-11-04 02:43:00.245434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.320 [2024-11-04 02:43:00.245461] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:13.320 [2024-11-04 02:43:00.249810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.320 [2024-11-04 02:43:00.249856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:13.320 [2024-11-04 02:43:00.249887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.354 ms 00:32:13.320 [2024-11-04 02:43:00.249896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.320 [2024-11-04 02:43:00.249932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.320 [2024-11-04 02:43:00.249940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:13.320 [2024-11-04 02:43:00.249948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:13.320 [2024-11-04 02:43:00.249956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.320 [2024-11-04 02:43:00.250016] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:13.320 [2024-11-04 02:43:00.250042] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:13.320 [2024-11-04 02:43:00.250083] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:13.320 [2024-11-04 02:43:00.250099] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:13.320 [2024-11-04 02:43:00.250204] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:13.320 [2024-11-04 02:43:00.250216] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:13.320 [2024-11-04 02:43:00.250228] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 
0x190 bytes 00:32:13.320 [2024-11-04 02:43:00.250240] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:13.320 [2024-11-04 02:43:00.250250] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:13.320 [2024-11-04 02:43:00.250258] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:13.320 [2024-11-04 02:43:00.250266] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:13.320 [2024-11-04 02:43:00.250276] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:13.320 [2024-11-04 02:43:00.250284] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:13.320 [2024-11-04 02:43:00.250292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.320 [2024-11-04 02:43:00.250299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:13.320 [2024-11-04 02:43:00.250308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:32:13.320 [2024-11-04 02:43:00.250317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.320 [2024-11-04 02:43:00.250400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.320 [2024-11-04 02:43:00.250411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:13.320 [2024-11-04 02:43:00.250419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:13.320 [2024-11-04 02:43:00.250426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.320 [2024-11-04 02:43:00.250531] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:13.320 [2024-11-04 02:43:00.250544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:13.320 [2024-11-04 02:43:00.250553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:13.320 [2024-11-04 02:43:00.250561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:13.320 [2024-11-04 02:43:00.250569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:13.320 [2024-11-04 02:43:00.250576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:13.320 [2024-11-04 02:43:00.250584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:13.320 [2024-11-04 02:43:00.250592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:13.320 [2024-11-04 02:43:00.250600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:13.320 [2024-11-04 02:43:00.250609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:13.320 [2024-11-04 02:43:00.250619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:13.320 [2024-11-04 02:43:00.250627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:13.320 [2024-11-04 02:43:00.250635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:13.320 [2024-11-04 02:43:00.250642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:13.320 [2024-11-04 02:43:00.250649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:13.320 [2024-11-04 02:43:00.250656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:13.320 [2024-11-04 02:43:00.250663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region nvc_md_mirror 00:32:13.320 [2024-11-04 02:43:00.250676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:13.320 [2024-11-04 02:43:00.250683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:13.320 [2024-11-04 02:43:00.250691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:13.320 [2024-11-04 02:43:00.250697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:13.320 [2024-11-04 02:43:00.250706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:13.320 [2024-11-04 02:43:00.250713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:13.320 [2024-11-04 02:43:00.250720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:13.320 [2024-11-04 02:43:00.250726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:13.320 [2024-11-04 02:43:00.250732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:13.320 [2024-11-04 02:43:00.250739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:13.320 [2024-11-04 02:43:00.250745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:13.320 [2024-11-04 02:43:00.250751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:13.320 [2024-11-04 02:43:00.250758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:13.320 [2024-11-04 02:43:00.250764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:13.320 [2024-11-04 02:43:00.250770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:13.320 [2024-11-04 02:43:00.250777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:13.320 [2024-11-04 02:43:00.250785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:13.320 [2024-11-04 02:43:00.250792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:13.320 [2024-11-04 02:43:00.250798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:13.320 [2024-11-04 02:43:00.250804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:13.320 [2024-11-04 02:43:00.250811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:13.320 [2024-11-04 02:43:00.250817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:13.320 [2024-11-04 02:43:00.250823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:13.320 [2024-11-04 02:43:00.250829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:13.320 [2024-11-04 02:43:00.250836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:13.320 [2024-11-04 02:43:00.250843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:13.320 [2024-11-04 02:43:00.250852] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:13.320 [2024-11-04 02:43:00.250882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:13.320 [2024-11-04 02:43:00.250892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:13.320 [2024-11-04 02:43:00.250901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:13.320 [2024-11-04 02:43:00.250910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:13.320 [2024-11-04 02:43:00.250917] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:13.321 [2024-11-04 02:43:00.250924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:13.321 [2024-11-04 02:43:00.250932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:13.321 [2024-11-04 02:43:00.250939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:13.321 [2024-11-04 02:43:00.250946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:13.321 [2024-11-04 02:43:00.250955] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:13.321 [2024-11-04 02:43:00.250965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:13.321 [2024-11-04 02:43:00.250976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:13.321 [2024-11-04 02:43:00.250984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:13.321 [2024-11-04 02:43:00.250991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:13.321 [2024-11-04 02:43:00.250999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:13.321 [2024-11-04 02:43:00.251008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:13.321 [2024-11-04 02:43:00.251016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:13.321 [2024-11-04 02:43:00.251023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:13.321 [2024-11-04 02:43:00.251031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:13.321 [2024-11-04 02:43:00.251038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:13.321 [2024-11-04 02:43:00.251045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:13.321 [2024-11-04 02:43:00.251053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:13.321 [2024-11-04 02:43:00.251060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:13.321 [2024-11-04 02:43:00.251067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:13.321 [2024-11-04 02:43:00.251075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:13.321 [2024-11-04 02:43:00.251082] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:13.321 [2024-11-04 02:43:00.251091] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:13.321 [2024-11-04 02:43:00.251102] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:13.321 [2024-11-04 02:43:00.251111] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:13.321 [2024-11-04 02:43:00.251118] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:13.321 [2024-11-04 02:43:00.251125] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:13.321 [2024-11-04 02:43:00.251134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.321 [2024-11-04 02:43:00.251141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:13.321 [2024-11-04 02:43:00.251149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.671 ms 00:32:13.321 [2024-11-04 02:43:00.251156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.321 [2024-11-04 02:43:00.279542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.321 [2024-11-04 02:43:00.279745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:13.321 [2024-11-04 02:43:00.279809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.342 ms 00:32:13.321 [2024-11-04 02:43:00.279834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.321 [2024-11-04 02:43:00.279952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.321 [2024-11-04 02:43:00.279978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:13.321 [2024-11-04 02:43:00.279999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:32:13.321 [2024-11-04 02:43:00.280026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.321 [2024-11-04 02:43:00.322812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.321 [2024-11-04 02:43:00.323040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:13.321 [2024-11-04 02:43:00.323113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.709 ms 00:32:13.321 [2024-11-04 02:43:00.323140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.321 [2024-11-04 02:43:00.323202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.321 [2024-11-04 02:43:00.323236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:13.321 [2024-11-04 02:43:00.323259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:13.321 [2024-11-04 02:43:00.323278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.321 [2024-11-04 02:43:00.323417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.321 [2024-11-04 02:43:00.323449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:13.321 [2024-11-04 02:43:00.323553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:32:13.321 [2024-11-04 02:43:00.323582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.321 [2024-11-04 02:43:00.323761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:32:13.321 [2024-11-04 02:43:00.324342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:13.321 [2024-11-04 02:43:00.324858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:32:13.321 [2024-11-04 02:43:00.324938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.321 [2024-11-04 02:43:00.341019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.321 [2024-11-04 02:43:00.341202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:13.321 [2024-11-04 02:43:00.341222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.018 ms 00:32:13.321 [2024-11-04 02:43:00.341231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.321 [2024-11-04 02:43:00.341403] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:32:13.321 [2024-11-04 02:43:00.341419] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:13.321 [2024-11-04 02:43:00.341430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.321 [2024-11-04 02:43:00.341439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:13.321 [2024-11-04 02:43:00.341452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:32:13.321 [2024-11-04 02:43:00.341460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.321 [2024-11-04 02:43:00.353899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.321 [2024-11-04 02:43:00.354006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:13.321 [2024-11-04 02:43:00.354021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.419 ms 00:32:13.321 [2024-11-04 02:43:00.354029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.321 [2024-11-04 02:43:00.354137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.321 [2024-11-04 02:43:00.354145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:13.321 [2024-11-04 02:43:00.354153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:32:13.321 [2024-11-04 02:43:00.354160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.321 [2024-11-04 02:43:00.354207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.321 [2024-11-04 02:43:00.354217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:13.321 [2024-11-04 02:43:00.354225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:32:13.321 [2024-11-04 02:43:00.354233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.321 [2024-11-04 02:43:00.354781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.321 [2024-11-04 02:43:00.354793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:13.321 [2024-11-04 02:43:00.354801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:32:13.321 [2024-11-04 02:43:00.354808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.321 [2024-11-04 02:43:00.354822] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 
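[annotator note] Every FTL management step is traced as a fixed group of records: Action (or Rollback), name, duration, status. That regularity makes the log easy to mine; a hypothetical one-liner to rank steps by duration is sketched below, assuming one record per line as originally emitted and using build.log as a placeholder filename.

    # Rank FTL management steps by duration: capture the step name, then
    # print the "duration: X ms" value that follows it.
    awk '/trace_step.*name:/     { sub(/.*name: /, ""); step = $0 }
         /trace_step.*duration:/ { printf "%10.3f ms  %s\n", $(NF-1), step }' \
        build.log | sort -rn | head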
00:32:13.321 [2024-11-04 02:43:00.354833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.321 [2024-11-04 02:43:00.354841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:13.321 [2024-11-04 02:43:00.354848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:32:13.321 [2024-11-04 02:43:00.354855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.321 [2024-11-04 02:43:00.365968] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:13.321 [2024-11-04 02:43:00.366100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.321 [2024-11-04 02:43:00.366110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:13.321 [2024-11-04 02:43:00.366120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.199 ms 00:32:13.321 [2024-11-04 02:43:00.366127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.321 [2024-11-04 02:43:00.368310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.321 [2024-11-04 02:43:00.368335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:13.321 [2024-11-04 02:43:00.368347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.165 ms 00:32:13.321 [2024-11-04 02:43:00.368355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.321 [2024-11-04 02:43:00.368415] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:32:13.321 [2024-11-04 02:43:00.368886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.321 [2024-11-04 02:43:00.368902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:13.321 [2024-11-04 02:43:00.368917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.486 ms 00:32:13.321 [2024-11-04 02:43:00.368925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.321 [2024-11-04 02:43:00.368948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.321 [2024-11-04 02:43:00.368959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:13.321 [2024-11-04 02:43:00.368966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:13.321 [2024-11-04 02:43:00.368974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.322 [2024-11-04 02:43:00.369002] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:13.322 [2024-11-04 02:43:00.369011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.322 [2024-11-04 02:43:00.369018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:13.322 [2024-11-04 02:43:00.369025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:13.322 [2024-11-04 02:43:00.369032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.322 [2024-11-04 02:43:00.393523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.322 [2024-11-04 02:43:00.393662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:13.322 [2024-11-04 02:43:00.393678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.475 ms 00:32:13.322 [2024-11-04 02:43:00.393686] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:32:13.322 [2024-11-04 02:43:00.393750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.322 [2024-11-04 02:43:00.393759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:13.322 [2024-11-04 02:43:00.393767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:32:13.322 [2024-11-04 02:43:00.393775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.322 [2024-11-04 02:43:00.394680] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 151.019 ms, result 0 00:32:14.710  [2024-11-04T02:43:02.766Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-04T02:43:03.712Z] Copying: 29/1024 [MB] (11 MBps) [2024-11-04T02:43:04.690Z] Copying: 43/1024 [MB] (13 MBps) [2024-11-04T02:43:05.632Z] Copying: 66/1024 [MB] (23 MBps) [2024-11-04T02:43:07.018Z] Copying: 82/1024 [MB] (15 MBps) [2024-11-04T02:43:07.962Z] Copying: 97/1024 [MB] (14 MBps) [2024-11-04T02:43:08.904Z] Copying: 113/1024 [MB] (16 MBps) [2024-11-04T02:43:09.846Z] Copying: 128/1024 [MB] (15 MBps) [2024-11-04T02:43:10.787Z] Copying: 146/1024 [MB] (17 MBps) [2024-11-04T02:43:11.726Z] Copying: 159/1024 [MB] (12 MBps) [2024-11-04T02:43:12.665Z] Copying: 182/1024 [MB] (23 MBps) [2024-11-04T02:43:13.605Z] Copying: 195/1024 [MB] (12 MBps) [2024-11-04T02:43:14.988Z] Copying: 213/1024 [MB] (17 MBps) [2024-11-04T02:43:15.929Z] Copying: 226/1024 [MB] (12 MBps) [2024-11-04T02:43:16.873Z] Copying: 242/1024 [MB] (16 MBps) [2024-11-04T02:43:17.816Z] Copying: 254/1024 [MB] (11 MBps) [2024-11-04T02:43:18.760Z] Copying: 265/1024 [MB] (10 MBps) [2024-11-04T02:43:19.704Z] Copying: 280/1024 [MB] (15 MBps) [2024-11-04T02:43:20.647Z] Copying: 291/1024 [MB] (10 MBps) [2024-11-04T02:43:22.036Z] Copying: 311/1024 [MB] (19 MBps) [2024-11-04T02:43:22.608Z] Copying: 329/1024 [MB] (17 MBps) [2024-11-04T02:43:23.996Z] Copying: 342/1024 [MB] (12 MBps) [2024-11-04T02:43:24.937Z] Copying: 355/1024 [MB] (13 MBps) [2024-11-04T02:43:25.914Z] Copying: 366/1024 [MB] (11 MBps) [2024-11-04T02:43:26.858Z] Copying: 383/1024 [MB] (17 MBps) [2024-11-04T02:43:27.803Z] Copying: 397/1024 [MB] (14 MBps) [2024-11-04T02:43:28.747Z] Copying: 409/1024 [MB] (11 MBps) [2024-11-04T02:43:29.689Z] Copying: 420/1024 [MB] (11 MBps) [2024-11-04T02:43:30.629Z] Copying: 431/1024 [MB] (11 MBps) [2024-11-04T02:43:32.017Z] Copying: 449/1024 [MB] (18 MBps) [2024-11-04T02:43:32.958Z] Copying: 462/1024 [MB] (13 MBps) [2024-11-04T02:43:33.898Z] Copying: 479/1024 [MB] (16 MBps) [2024-11-04T02:43:34.837Z] Copying: 490/1024 [MB] (10 MBps) [2024-11-04T02:43:35.780Z] Copying: 501/1024 [MB] (10 MBps) [2024-11-04T02:43:36.720Z] Copying: 512/1024 [MB] (11 MBps) [2024-11-04T02:43:37.665Z] Copying: 530/1024 [MB] (18 MBps) [2024-11-04T02:43:38.611Z] Copying: 540/1024 [MB] (10 MBps) [2024-11-04T02:43:39.996Z] Copying: 557/1024 [MB] (16 MBps) [2024-11-04T02:43:40.936Z] Copying: 568/1024 [MB] (10 MBps) [2024-11-04T02:43:41.875Z] Copying: 578/1024 [MB] (10 MBps) [2024-11-04T02:43:42.818Z] Copying: 594/1024 [MB] (15 MBps) [2024-11-04T02:43:43.758Z] Copying: 604/1024 [MB] (10 MBps) [2024-11-04T02:43:44.700Z] Copying: 618/1024 [MB] (14 MBps) [2024-11-04T02:43:45.699Z] Copying: 630/1024 [MB] (11 MBps) [2024-11-04T02:43:46.644Z] Copying: 641/1024 [MB] (10 MBps) [2024-11-04T02:43:48.032Z] Copying: 651/1024 [MB] (10 MBps) [2024-11-04T02:43:48.605Z] Copying: 662/1024 [MB] (10 MBps) [2024-11-04T02:43:49.993Z] Copying: 673/1024 [MB] (10 MBps) 
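[annotator note] The per-second ticks in this stretch run mostly at 10-23 MBps, and the final tick just below reports "(average 13 MBps)". A quick sanity check: roughly 1024 MB copied between FTL startup (02:43:00) and shutdown (02:44:14), an approximately 74-second window, is consistent with that integer average.

    # ~1024 MB over ~74 s -> 13 MB/s (integer division), matching the
    # "(average 13 MBps)" figure in the final progress tick.
    echo $(( 1024 / 74 ))   # -> 13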
[2024-11-04T02:43:50.937Z] Copying: 683/1024 [MB] (10 MBps) [2024-11-04T02:43:51.882Z] Copying: 697/1024 [MB] (13 MBps) [2024-11-04T02:43:52.825Z] Copying: 710/1024 [MB] (13 MBps) [2024-11-04T02:43:53.771Z] Copying: 727/1024 [MB] (16 MBps) [2024-11-04T02:43:54.712Z] Copying: 746/1024 [MB] (19 MBps) [2024-11-04T02:43:55.655Z] Copying: 757/1024 [MB] (11 MBps) [2024-11-04T02:43:57.043Z] Copying: 773/1024 [MB] (16 MBps) [2024-11-04T02:43:57.616Z] Copying: 789/1024 [MB] (16 MBps) [2024-11-04T02:43:59.002Z] Copying: 801/1024 [MB] (12 MBps) [2024-11-04T02:43:59.945Z] Copying: 813/1024 [MB] (11 MBps) [2024-11-04T02:44:00.890Z] Copying: 825/1024 [MB] (11 MBps) [2024-11-04T02:44:01.833Z] Copying: 838/1024 [MB] (13 MBps) [2024-11-04T02:44:02.772Z] Copying: 858/1024 [MB] (20 MBps) [2024-11-04T02:44:03.716Z] Copying: 871/1024 [MB] (13 MBps) [2024-11-04T02:44:04.661Z] Copying: 886/1024 [MB] (14 MBps) [2024-11-04T02:44:05.606Z] Copying: 897/1024 [MB] (10 MBps) [2024-11-04T02:44:06.993Z] Copying: 910/1024 [MB] (13 MBps) [2024-11-04T02:44:07.936Z] Copying: 925/1024 [MB] (15 MBps) [2024-11-04T02:44:08.879Z] Copying: 939/1024 [MB] (13 MBps) [2024-11-04T02:44:09.824Z] Copying: 952/1024 [MB] (12 MBps) [2024-11-04T02:44:10.767Z] Copying: 962/1024 [MB] (10 MBps) [2024-11-04T02:44:11.710Z] Copying: 972/1024 [MB] (10 MBps) [2024-11-04T02:44:12.652Z] Copying: 983/1024 [MB] (10 MBps) [2024-11-04T02:44:14.038Z] Copying: 993/1024 [MB] (10 MBps) [2024-11-04T02:44:14.610Z] Copying: 1012/1024 [MB] (18 MBps) [2024-11-04T02:44:14.872Z] Copying: 1023/1024 [MB] (10 MBps) [2024-11-04T02:44:14.872Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-04 02:44:14.759812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:27.761 [2024-11-04 02:44:14.759931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:27.761 [2024-11-04 02:44:14.759954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:27.761 [2024-11-04 02:44:14.759966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:27.761 [2024-11-04 02:44:14.759996] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:27.761 [2024-11-04 02:44:14.763633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:27.761 [2024-11-04 02:44:14.763675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:27.761 [2024-11-04 02:44:14.763690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.617 ms 00:33:27.761 [2024-11-04 02:44:14.763701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:27.761 [2024-11-04 02:44:14.763999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:27.761 [2024-11-04 02:44:14.764016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:27.761 [2024-11-04 02:44:14.764028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:33:27.761 [2024-11-04 02:44:14.764038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:27.761 [2024-11-04 02:44:14.764073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:27.761 [2024-11-04 02:44:14.764084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:27.761 [2024-11-04 02:44:14.764094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:27.761 [2024-11-04 02:44:14.764103] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:33:27.761 [2024-11-04 02:44:14.764176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:27.761 [2024-11-04 02:44:14.764187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:27.761 [2024-11-04 02:44:14.764200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:33:27.761 [2024-11-04 02:44:14.764210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:27.761 [2024-11-04 02:44:14.764227] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:27.761 [2024-11-04 02:44:14.764243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:33:27.761 [2024-11-04 02:44:14.764255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764656] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:27.761 [2024-11-04 02:44:14.764837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:27.762 [2024-11-04 02:44:14.764845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:27.762 [2024-11-04 02:44:14.764854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:27.762 [2024-11-04 02:44:14.764863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:27.762 [2024-11-04 02:44:14.764883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:27.762 [2024-11-04 02:44:14.764892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:27.762 [2024-11-04 
02:44:14.764900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:27.762 [2024-11-04 02:44:14.764908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:27.762 [2024-11-04 02:44:14.764918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:27.762 [2024-11-04 02:44:14.764926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:27.762 [2024-11-04 02:44:14.764936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:27.762 [2024-11-04 02:44:14.764946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:27.762 [2024-11-04 02:44:14.764962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:27.762 [2024-11-04 02:44:14.764985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:27.762 [2024-11-04 02:44:14.764995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:27.762 [2024-11-04 02:44:14.765005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:27.762 [2024-11-04 02:44:14.765029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:27.762 [2024-11-04 02:44:14.765039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:27.762 [2024-11-04 02:44:14.765049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:27.762 [2024-11-04 02:44:14.765058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:27.762 [2024-11-04 02:44:14.765067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:27.762 [2024-11-04 02:44:14.765076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:27.762 [2024-11-04 02:44:14.765085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:27.762 [2024-11-04 02:44:14.765094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:27.762 [2024-11-04 02:44:14.765103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:27.762 [2024-11-04 02:44:14.765112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:27.762 [2024-11-04 02:44:14.765121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:27.762 [2024-11-04 02:44:14.765130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:27.762 [2024-11-04 02:44:14.765139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:27.762 [2024-11-04 02:44:14.765148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:27.762 [2024-11-04 02:44:14.765157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 
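[annotator note] This band-validity dump shows band 1 open with 131072 of 261120 blocks written and every other band free, and the statistics block just below reports WAF 1.0156, i.e. total writes over user writes. A hypothetical condensation of such a dump (build.log again a placeholder), plus the WAF arithmetic:

    # Count bands per state, and list any band actually holding data.
    grep -o 'state: [a-z]*' build.log | sort | uniq -c
    grep 'ftl_dev_dump_bands' build.log | grep -v 'wr_cnt: 0'
    # WAF from the stats block: 2080 total writes / 2048 user writes
    echo "scale=4; 2080 / 2048" | bc   # -> 1.0156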
00:33:27.762 [2024-11-04 02:44:14.765166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:27.762 [2024-11-04 02:44:14.765176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:27.762 [2024-11-04 02:44:14.765185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:27.762 [2024-11-04 02:44:14.765193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:27.762 [2024-11-04 02:44:14.765202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:27.762 [2024-11-04 02:44:14.765221] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:27.762 [2024-11-04 02:44:14.765231] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f95c5d21-0b04-45bd-8eaf-594cb18bd776 00:33:27.762 [2024-11-04 02:44:14.765240] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:33:27.762 [2024-11-04 02:44:14.765249] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 2080 00:33:27.762 [2024-11-04 02:44:14.765258] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 2048 00:33:27.762 [2024-11-04 02:44:14.765268] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0156 00:33:27.762 [2024-11-04 02:44:14.765276] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:27.762 [2024-11-04 02:44:14.765288] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:27.762 [2024-11-04 02:44:14.765300] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:27.762 [2024-11-04 02:44:14.765308] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:27.762 [2024-11-04 02:44:14.765315] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:27.762 [2024-11-04 02:44:14.765323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:27.762 [2024-11-04 02:44:14.765333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:27.762 [2024-11-04 02:44:14.765342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.098 ms 00:33:27.762 [2024-11-04 02:44:14.765352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:27.762 [2024-11-04 02:44:14.786897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:27.762 [2024-11-04 02:44:14.786963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:27.762 [2024-11-04 02:44:14.786979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.521 ms 00:33:27.762 [2024-11-04 02:44:14.787001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:27.762 [2024-11-04 02:44:14.787439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:27.762 [2024-11-04 02:44:14.787454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:27.762 [2024-11-04 02:44:14.787463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:33:27.762 [2024-11-04 02:44:14.787471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:27.762 [2024-11-04 02:44:14.823815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:27.762 [2024-11-04 02:44:14.824036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize reloc 00:33:27.762 [2024-11-04 02:44:14.824066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:27.762 [2024-11-04 02:44:14.824076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:27.762 [2024-11-04 02:44:14.824162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:27.762 [2024-11-04 02:44:14.824172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:27.762 [2024-11-04 02:44:14.824181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:27.762 [2024-11-04 02:44:14.824189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:27.762 [2024-11-04 02:44:14.824257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:27.762 [2024-11-04 02:44:14.824267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:27.762 [2024-11-04 02:44:14.824276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:27.762 [2024-11-04 02:44:14.824288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:27.762 [2024-11-04 02:44:14.824306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:27.762 [2024-11-04 02:44:14.824316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:27.762 [2024-11-04 02:44:14.824325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:27.762 [2024-11-04 02:44:14.824333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.023 [2024-11-04 02:44:14.908901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.023 [2024-11-04 02:44:14.908958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:28.023 [2024-11-04 02:44:14.908979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.023 [2024-11-04 02:44:14.908988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.023 [2024-11-04 02:44:14.978808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.023 [2024-11-04 02:44:14.978890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:28.023 [2024-11-04 02:44:14.978910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.023 [2024-11-04 02:44:14.978919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.023 [2024-11-04 02:44:14.979010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.023 [2024-11-04 02:44:14.979042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:28.023 [2024-11-04 02:44:14.979052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.023 [2024-11-04 02:44:14.979061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.023 [2024-11-04 02:44:14.979105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.023 [2024-11-04 02:44:14.979115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:28.023 [2024-11-04 02:44:14.979123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.023 [2024-11-04 02:44:14.979132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.023 [2024-11-04 02:44:14.979215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.023 [2024-11-04 
02:44:14.979226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:28.023 [2024-11-04 02:44:14.979235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.023 [2024-11-04 02:44:14.979245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.023 [2024-11-04 02:44:14.979276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.023 [2024-11-04 02:44:14.979286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:28.023 [2024-11-04 02:44:14.979294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.023 [2024-11-04 02:44:14.979303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.023 [2024-11-04 02:44:14.979347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.023 [2024-11-04 02:44:14.979359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:28.023 [2024-11-04 02:44:14.979367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.023 [2024-11-04 02:44:14.979375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.023 [2024-11-04 02:44:14.979429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.023 [2024-11-04 02:44:14.979443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:28.023 [2024-11-04 02:44:14.979452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.023 [2024-11-04 02:44:14.979460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.023 [2024-11-04 02:44:14.979620] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 219.761 ms, result 0 00:33:28.964 00:33:28.964 00:33:28.964 02:44:15 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:30.911 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:30.911 02:44:17 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:33:30.911 02:44:17 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:33:30.911 02:44:17 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:33:31.172 02:44:18 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:31.172 02:44:18 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:31.172 02:44:18 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 81323 00:33:31.172 02:44:18 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # '[' -z 81323 ']' 00:33:31.172 02:44:18 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # kill -0 81323 00:33:31.172 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (81323) - No such process 00:33:31.172 Process with pid 81323 is not found 00:33:31.172 02:44:18 ftl.ftl_restore_fast -- common/autotest_common.sh@979 -- # echo 'Process with pid 81323 is not found' 00:33:31.172 02:44:18 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:33:31.172 Remove shared memory files 00:33:31.172 02:44:18 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:31.172 02:44:18 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:33:31.172 02:44:18 ftl.ftl_restore_fast -- 
ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_f95c5d21-0b04-45bd-8eaf-594cb18bd776_band_md /dev/hugepages/ftl_f95c5d21-0b04-45bd-8eaf-594cb18bd776_l2p_l1 /dev/hugepages/ftl_f95c5d21-0b04-45bd-8eaf-594cb18bd776_l2p_l2 /dev/hugepages/ftl_f95c5d21-0b04-45bd-8eaf-594cb18bd776_l2p_l2_ctx /dev/hugepages/ftl_f95c5d21-0b04-45bd-8eaf-594cb18bd776_nvc_md /dev/hugepages/ftl_f95c5d21-0b04-45bd-8eaf-594cb18bd776_p2l_pool /dev/hugepages/ftl_f95c5d21-0b04-45bd-8eaf-594cb18bd776_sb /dev/hugepages/ftl_f95c5d21-0b04-45bd-8eaf-594cb18bd776_sb_shm /dev/hugepages/ftl_f95c5d21-0b04-45bd-8eaf-594cb18bd776_trim_bitmap /dev/hugepages/ftl_f95c5d21-0b04-45bd-8eaf-594cb18bd776_trim_log /dev/hugepages/ftl_f95c5d21-0b04-45bd-8eaf-594cb18bd776_trim_md /dev/hugepages/ftl_f95c5d21-0b04-45bd-8eaf-594cb18bd776_vmap 00:33:31.172 02:44:18 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:33:31.172 02:44:18 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:31.172 02:44:18 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:33:31.172 ************************************ 00:33:31.172 END TEST ftl_restore_fast 00:33:31.172 ************************************ 00:33:31.172 00:33:31.172 real 4m35.782s 00:33:31.172 user 4m23.022s 00:33:31.172 sys 0m12.737s 00:33:31.172 02:44:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:31.172 02:44:18 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:33:31.172 Process with pid 72134 is not found 00:33:31.172 02:44:18 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:33:31.172 02:44:18 ftl -- ftl/ftl.sh@14 -- # killprocess 72134 00:33:31.172 02:44:18 ftl -- common/autotest_common.sh@952 -- # '[' -z 72134 ']' 00:33:31.172 02:44:18 ftl -- common/autotest_common.sh@956 -- # kill -0 72134 00:33:31.172 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (72134) - No such process 00:33:31.172 02:44:18 ftl -- common/autotest_common.sh@979 -- # echo 'Process with pid 72134 is not found' 00:33:31.172 02:44:18 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:33:31.172 02:44:18 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84138 00:33:31.172 02:44:18 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84138 00:33:31.172 02:44:18 ftl -- common/autotest_common.sh@833 -- # '[' -z 84138 ']' 00:33:31.172 02:44:18 ftl -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:31.172 02:44:18 ftl -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:31.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:31.172 02:44:18 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:31.172 02:44:18 ftl -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:31.172 02:44:18 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:31.172 02:44:18 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:31.431 [2024-11-04 02:44:18.304252] Starting SPDK v25.01-pre git sha1 fa3ab7384 / DPDK 24.03.0 initialization... 
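[annotator note] The remove_shm helper traced above (ftl/common.sh lines 204-209) sweeps the hugepage-backed FTL metadata files that a fast (dirty) shutdown intentionally leaves behind; the files are keyed by the device UUID from the stats dump. A sketch with the UUID wildcarded, file names taken from the trace:

    # Drop hugepage-backed FTL state left by the fast shutdown, plus any
    # iSCSI shm file, so the next test starts from a clean slate.
    rm -f /dev/hugepages/ftl_*_band_md    /dev/hugepages/ftl_*_l2p_l1 \
          /dev/hugepages/ftl_*_l2p_l2    /dev/hugepages/ftl_*_l2p_l2_ctx \
          /dev/hugepages/ftl_*_nvc_md    /dev/hugepages/ftl_*_p2l_pool \
          /dev/hugepages/ftl_*_sb        /dev/hugepages/ftl_*_sb_shm \
          /dev/hugepages/ftl_*_trim_bitmap /dev/hugepages/ftl_*_trim_log \
          /dev/hugepages/ftl_*_trim_md   /dev/hugepages/ftl_*_vmap
    rm -f /dev/shm/iscsi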
00:33:31.431 [2024-11-04 02:44:18.304692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84138 ] 00:33:31.431 [2024-11-04 02:44:18.468971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.690 [2024-11-04 02:44:18.556772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:32.256 02:44:19 ftl -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:32.256 02:44:19 ftl -- common/autotest_common.sh@866 -- # return 0 00:33:32.256 02:44:19 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:33:32.513 nvme0n1 00:33:32.513 02:44:19 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:33:32.513 02:44:19 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:32.513 02:44:19 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:33:32.513 02:44:19 ftl -- ftl/common.sh@28 -- # stores=53d8121d-df7f-4fb3-b9a1-5b777512c1d9 00:33:32.513 02:44:19 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:33:32.513 02:44:19 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 53d8121d-df7f-4fb3-b9a1-5b777512c1d9 00:33:32.770 02:44:19 ftl -- ftl/ftl.sh@23 -- # killprocess 84138 00:33:32.770 02:44:19 ftl -- common/autotest_common.sh@952 -- # '[' -z 84138 ']' 00:33:32.770 02:44:19 ftl -- common/autotest_common.sh@956 -- # kill -0 84138 00:33:32.770 02:44:19 ftl -- common/autotest_common.sh@957 -- # uname 00:33:32.770 02:44:19 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:32.770 02:44:19 ftl -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84138 00:33:32.770 killing process with pid 84138 00:33:32.770 02:44:19 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:32.770 02:44:19 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:32.770 02:44:19 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84138' 00:33:32.770 02:44:19 ftl -- common/autotest_common.sh@971 -- # kill 84138 00:33:32.770 02:44:19 ftl -- common/autotest_common.sh@976 -- # wait 84138 00:33:34.144 02:44:20 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:33:34.144 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:34.144 Waiting for block devices as requested 00:33:34.403 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:33:34.403 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:33:34.403 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:33:34.403 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:33:39.666 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:33:39.666 02:44:26 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:33:39.666 Remove shared memory files 00:33:39.666 02:44:26 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:39.666 02:44:26 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:33:39.666 02:44:26 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:33:39.666 02:44:26 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:33:39.666 02:44:26 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:39.666 02:44:26 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:33:39.666 
************************************ 00:33:39.666 END TEST ftl 00:33:39.666 ************************************ 00:33:39.666 00:33:39.666 real 18m17.537s 00:33:39.666 user 20m34.607s 00:33:39.666 sys 1m23.708s 00:33:39.666 02:44:26 ftl -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:39.666 02:44:26 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:39.666 02:44:26 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:33:39.666 02:44:26 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:33:39.666 02:44:26 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:33:39.666 02:44:26 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:33:39.666 02:44:26 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:33:39.666 02:44:26 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:33:39.666 02:44:26 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:33:39.666 02:44:26 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:33:39.666 02:44:26 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:33:39.666 02:44:26 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:33:39.666 02:44:26 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:39.666 02:44:26 -- common/autotest_common.sh@10 -- # set +x 00:33:39.666 02:44:26 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:33:39.666 02:44:26 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:33:39.666 02:44:26 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:33:39.666 02:44:26 -- common/autotest_common.sh@10 -- # set +x 00:33:41.039 INFO: APP EXITING 00:33:41.039 INFO: killing all VMs 00:33:41.039 INFO: killing vhost app 00:33:41.039 INFO: EXIT DONE 00:33:41.039 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:41.297 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:33:41.297 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:33:41.297 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:33:41.297 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:33:41.862 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:42.120 Cleaning 00:33:42.120 Removing: /var/run/dpdk/spdk0/config 00:33:42.120 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:42.120 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:42.120 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:42.120 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:42.120 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:42.120 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:42.120 Removing: /var/run/dpdk/spdk0 00:33:42.120 Removing: /var/run/dpdk/spdk_pid56915 00:33:42.120 Removing: /var/run/dpdk/spdk_pid57117 00:33:42.120 Removing: /var/run/dpdk/spdk_pid57330 00:33:42.120 Removing: /var/run/dpdk/spdk_pid57423 00:33:42.120 Removing: /var/run/dpdk/spdk_pid57462 00:33:42.120 Removing: /var/run/dpdk/spdk_pid57579 00:33:42.120 Removing: /var/run/dpdk/spdk_pid57597 00:33:42.120 Removing: /var/run/dpdk/spdk_pid57785 00:33:42.120 Removing: /var/run/dpdk/spdk_pid57878 00:33:42.120 Removing: /var/run/dpdk/spdk_pid57974 00:33:42.120 Removing: /var/run/dpdk/spdk_pid58080 00:33:42.120 Removing: /var/run/dpdk/spdk_pid58171 00:33:42.120 Removing: /var/run/dpdk/spdk_pid58211 00:33:42.120 Removing: /var/run/dpdk/spdk_pid58247 00:33:42.120 Removing: /var/run/dpdk/spdk_pid58318 00:33:42.120 Removing: /var/run/dpdk/spdk_pid58406 00:33:42.120 Removing: /var/run/dpdk/spdk_pid58832 00:33:42.120 Removing: /var/run/dpdk/spdk_pid58891 
00:33:42.120 Removing: /var/run/dpdk/spdk_pid58943 00:33:42.120 Removing: /var/run/dpdk/spdk_pid58959 00:33:42.120 Removing: /var/run/dpdk/spdk_pid59050 00:33:42.120 Removing: /var/run/dpdk/spdk_pid59066 00:33:42.120 Removing: /var/run/dpdk/spdk_pid59157 00:33:42.120 Removing: /var/run/dpdk/spdk_pid59173 00:33:42.120 Removing: /var/run/dpdk/spdk_pid59226 00:33:42.120 Removing: /var/run/dpdk/spdk_pid59244 00:33:42.120 Removing: /var/run/dpdk/spdk_pid59297 00:33:42.120 Removing: /var/run/dpdk/spdk_pid59315 00:33:42.120 Removing: /var/run/dpdk/spdk_pid59464 00:33:42.120 Removing: /var/run/dpdk/spdk_pid59506 00:33:42.120 Removing: /var/run/dpdk/spdk_pid59584 00:33:42.120 Removing: /var/run/dpdk/spdk_pid59762 00:33:42.120 Removing: /var/run/dpdk/spdk_pid59840 00:33:42.120 Removing: /var/run/dpdk/spdk_pid59882 00:33:42.120 Removing: /var/run/dpdk/spdk_pid60304 00:33:42.120 Removing: /var/run/dpdk/spdk_pid60398 00:33:42.120 Removing: /var/run/dpdk/spdk_pid60507 00:33:42.120 Removing: /var/run/dpdk/spdk_pid60562 00:33:42.120 Removing: /var/run/dpdk/spdk_pid60582 00:33:42.120 Removing: /var/run/dpdk/spdk_pid60666 00:33:42.120 Removing: /var/run/dpdk/spdk_pid61285 00:33:42.120 Removing: /var/run/dpdk/spdk_pid61322 00:33:42.120 Removing: /var/run/dpdk/spdk_pid61778 00:33:42.120 Removing: /var/run/dpdk/spdk_pid61876 00:33:42.120 Removing: /var/run/dpdk/spdk_pid61992 00:33:42.120 Removing: /var/run/dpdk/spdk_pid62045 00:33:42.120 Removing: /var/run/dpdk/spdk_pid62065 00:33:42.120 Removing: /var/run/dpdk/spdk_pid62096 00:33:42.120 Removing: /var/run/dpdk/spdk_pid63933 00:33:42.120 Removing: /var/run/dpdk/spdk_pid64070 00:33:42.120 Removing: /var/run/dpdk/spdk_pid64074 00:33:42.120 Removing: /var/run/dpdk/spdk_pid64086 00:33:42.120 Removing: /var/run/dpdk/spdk_pid64132 00:33:42.120 Removing: /var/run/dpdk/spdk_pid64136 00:33:42.120 Removing: /var/run/dpdk/spdk_pid64148 00:33:42.120 Removing: /var/run/dpdk/spdk_pid64193 00:33:42.120 Removing: /var/run/dpdk/spdk_pid64197 00:33:42.120 Removing: /var/run/dpdk/spdk_pid64209 00:33:42.120 Removing: /var/run/dpdk/spdk_pid64254 00:33:42.120 Removing: /var/run/dpdk/spdk_pid64258 00:33:42.120 Removing: /var/run/dpdk/spdk_pid64270 00:33:42.120 Removing: /var/run/dpdk/spdk_pid65637 00:33:42.120 Removing: /var/run/dpdk/spdk_pid65742 00:33:42.120 Removing: /var/run/dpdk/spdk_pid67147 00:33:42.120 Removing: /var/run/dpdk/spdk_pid68529 00:33:42.120 Removing: /var/run/dpdk/spdk_pid68612 00:33:42.120 Removing: /var/run/dpdk/spdk_pid68688 00:33:42.120 Removing: /var/run/dpdk/spdk_pid68764 00:33:42.120 Removing: /var/run/dpdk/spdk_pid68863 00:33:42.121 Removing: /var/run/dpdk/spdk_pid68936 00:33:42.121 Removing: /var/run/dpdk/spdk_pid69085 00:33:42.121 Removing: /var/run/dpdk/spdk_pid69435 00:33:42.121 Removing: /var/run/dpdk/spdk_pid69466 00:33:42.121 Removing: /var/run/dpdk/spdk_pid69905 00:33:42.121 Removing: /var/run/dpdk/spdk_pid70090 00:33:42.121 Removing: /var/run/dpdk/spdk_pid70196 00:33:42.121 Removing: /var/run/dpdk/spdk_pid70306 00:33:42.121 Removing: /var/run/dpdk/spdk_pid70354 00:33:42.121 Removing: /var/run/dpdk/spdk_pid70379 00:33:42.121 Removing: /var/run/dpdk/spdk_pid70667 00:33:42.121 Removing: /var/run/dpdk/spdk_pid70727 00:33:42.121 Removing: /var/run/dpdk/spdk_pid70800 00:33:42.121 Removing: /var/run/dpdk/spdk_pid71185 00:33:42.121 Removing: /var/run/dpdk/spdk_pid71330 00:33:42.121 Removing: /var/run/dpdk/spdk_pid72134 00:33:42.121 Removing: /var/run/dpdk/spdk_pid72261 00:33:42.121 Removing: /var/run/dpdk/spdk_pid72436 00:33:42.121 Removing: 
/var/run/dpdk/spdk_pid72540 00:33:42.121 Removing: /var/run/dpdk/spdk_pid72875 00:33:42.121 Removing: /var/run/dpdk/spdk_pid73206 00:33:42.121 Removing: /var/run/dpdk/spdk_pid73576 00:33:42.379 Removing: /var/run/dpdk/spdk_pid73764 00:33:42.379 Removing: /var/run/dpdk/spdk_pid73938 00:33:42.379 Removing: /var/run/dpdk/spdk_pid73991 00:33:42.379 Removing: /var/run/dpdk/spdk_pid74157 00:33:42.379 Removing: /var/run/dpdk/spdk_pid74193 00:33:42.379 Removing: /var/run/dpdk/spdk_pid74246 00:33:42.379 Removing: /var/run/dpdk/spdk_pid74525 00:33:42.379 Removing: /var/run/dpdk/spdk_pid74762 00:33:42.379 Removing: /var/run/dpdk/spdk_pid75406 00:33:42.379 Removing: /var/run/dpdk/spdk_pid76173 00:33:42.379 Removing: /var/run/dpdk/spdk_pid76767 00:33:42.379 Removing: /var/run/dpdk/spdk_pid77559 00:33:42.379 Removing: /var/run/dpdk/spdk_pid77701 00:33:42.379 Removing: /var/run/dpdk/spdk_pid77784 00:33:42.379 Removing: /var/run/dpdk/spdk_pid78313 00:33:42.379 Removing: /var/run/dpdk/spdk_pid78376 00:33:42.379 Removing: /var/run/dpdk/spdk_pid79004 00:33:42.379 Removing: /var/run/dpdk/spdk_pid79483 00:33:42.379 Removing: /var/run/dpdk/spdk_pid80258 00:33:42.379 Removing: /var/run/dpdk/spdk_pid80380 00:33:42.379 Removing: /var/run/dpdk/spdk_pid80427 00:33:42.379 Removing: /var/run/dpdk/spdk_pid80492 00:33:42.379 Removing: /var/run/dpdk/spdk_pid80551 00:33:42.379 Removing: /var/run/dpdk/spdk_pid80604 00:33:42.379 Removing: /var/run/dpdk/spdk_pid80806 00:33:42.379 Removing: /var/run/dpdk/spdk_pid80900 00:33:42.379 Removing: /var/run/dpdk/spdk_pid80973 00:33:42.379 Removing: /var/run/dpdk/spdk_pid81034 00:33:42.379 Removing: /var/run/dpdk/spdk_pid81069 00:33:42.379 Removing: /var/run/dpdk/spdk_pid81162 00:33:42.379 Removing: /var/run/dpdk/spdk_pid81323 00:33:42.379 Removing: /var/run/dpdk/spdk_pid81544 00:33:42.379 Removing: /var/run/dpdk/spdk_pid82090 00:33:42.379 Removing: /var/run/dpdk/spdk_pid82839 00:33:42.379 Removing: /var/run/dpdk/spdk_pid83329 00:33:42.379 Removing: /var/run/dpdk/spdk_pid84138 00:33:42.379 Clean 00:33:42.379 02:44:29 -- common/autotest_common.sh@1451 -- # return 0 00:33:42.379 02:44:29 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:33:42.379 02:44:29 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:42.379 02:44:29 -- common/autotest_common.sh@10 -- # set +x 00:33:42.379 02:44:29 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:33:42.379 02:44:29 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:42.379 02:44:29 -- common/autotest_common.sh@10 -- # set +x 00:33:42.379 02:44:29 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:42.379 02:44:29 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:33:42.379 02:44:29 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:33:42.379 02:44:29 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:33:42.379 02:44:29 -- spdk/autotest.sh@394 -- # hostname 00:33:42.379 02:44:29 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:33:42.641 geninfo: WARNING: invalid characters removed from testname! 
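The capture above writes cov_test.info; the lines that follow merge it with the pre-test baseline and strip out everything that is not SPDK's own code. A condensed sketch of that lcov flow, assuming the output-directory shorthand below (hypothetical; the log uses the full /home/vagrant/spdk_repo/spdk/../output path) and omitting the long --rc option set the job passes on every call:

    OUT=/home/vagrant/spdk_repo/output    # hypothetical shorthand
    # merge the pre-test baseline and the post-test capture
    lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" \
         -o "$OUT/cov_total.info"
    # drop third-party and system code from the report
    lcov -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
    lcov -q -r "$OUT/cov_total.info" '/usr/*'   -o "$OUT/cov_total.info"
    lcov -q -r "$OUT/cov_total.info" '*/examples/vmd/*' -o "$OUT/cov_total.info"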
00:34:09.212 02:44:54 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:10.598 02:44:57 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:13.904 02:45:00 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:16.450 02:45:02 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:18.386 02:45:05 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:20.295 02:45:07 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:22.841 02:45:09 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:22.841 02:45:09 -- spdk/autorun.sh@1 -- $ timing_finish 00:34:22.841 02:45:09 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:34:22.841 02:45:09 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:22.841 02:45:09 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:34:22.841 02:45:09 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:22.841 + [[ -n 5029 ]] 00:34:22.841 + sudo kill 5029 00:34:23.113 [Pipeline] } 00:34:23.127 [Pipeline] // timeout 00:34:23.131 [Pipeline] } 00:34:23.144 [Pipeline] // stage 00:34:23.148 [Pipeline] } 00:34:23.161 [Pipeline] // catchError 00:34:23.168 [Pipeline] stage 00:34:23.170 [Pipeline] { (Stop VM) 00:34:23.181 [Pipeline] sh 00:34:23.464 + vagrant halt 00:34:26.007 ==> default: Halting domain... 
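With coverage done, the pipeline powers the test VM down before deleting it: vagrant halt asks the guest for a clean shutdown (so in-VM state is flushed), and the vagrant destroy -f on the following lines then removes the domain without a confirmation prompt, after which the test output is moved out of the VM workspace so it survives the workspace wipe. A sketch of the teardown this stage performs, assuming the workspace layout shown in the log:

    vagrant halt        # graceful guest shutdown; flushes in-VM state
    vagrant destroy -f  # delete the domain, skipping the y/N prompt
    # preserve the test artifacts before the workspace is cleaned
    mv output /var/jenkins/workspace/nvme-vg-autotest/output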
00:34:32.670 [Pipeline] sh 00:34:32.962 + vagrant destroy -f 00:34:35.508 ==> default: Removing domain... 00:34:36.469 [Pipeline] sh 00:34:36.753 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:34:36.763 [Pipeline] } 00:34:36.778 [Pipeline] // stage 00:34:36.783 [Pipeline] } 00:34:36.797 [Pipeline] // dir 00:34:36.802 [Pipeline] } 00:34:36.815 [Pipeline] // wrap 00:34:36.821 [Pipeline] } 00:34:36.834 [Pipeline] // catchError 00:34:36.844 [Pipeline] stage 00:34:36.846 [Pipeline] { (Epilogue) 00:34:36.858 [Pipeline] sh 00:34:37.141 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:42.420 [Pipeline] catchError 00:34:42.422 [Pipeline] { 00:34:42.435 [Pipeline] sh 00:34:42.720 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:42.720 Artifacts sizes are good 00:34:42.728 [Pipeline] } 00:34:42.737 [Pipeline] // catchError 00:34:42.743 [Pipeline] archiveArtifacts 00:34:42.749 Archiving artifacts 00:34:42.834 [Pipeline] cleanWs 00:34:42.845 [WS-CLEANUP] Deleting project workspace... 00:34:42.846 [WS-CLEANUP] Deferred wipeout is used... 00:34:42.852 [WS-CLEANUP] done 00:34:42.854 [Pipeline] } 00:34:42.868 [Pipeline] // stage 00:34:42.873 [Pipeline] } 00:34:42.886 [Pipeline] // node 00:34:42.891 [Pipeline] End of Pipeline 00:34:42.960 Finished: SUCCESS